Memory at the Core of New Deep Learning Research Chip

April 17, 2017 | Source: The Next Platform (nextplatform.com), February 2, 2017, by Nicole Hemsoth

Over the last two years, there has been a push for novel architectures to feed the needs of machine learning and, more specifically, deep neural networks.

We have covered the many architectural options for both the training and inference sides of that workload here at The Next Platform and, in doing so, started to notice an interesting trend: some companies with custom ASICs targeted at that market seemed to be developing along a common thread, using memory as the target for processing.

Processing in memory (PIM) architectures are certainly nothing new, but because the relatively simple logic units inside memory devices map well to the needs of neural network training (for convolutional nets in particular), memory is becoming the next platform. We have looked at deep learning chips from companies like Nervana Systems (acquired by Intel in 2016) and Wave Computing, and as with other new architectures promising a silver bullet to smash through AlexNet and other benchmarks, memory is the key driver of performance and efficiency.
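A rough back-of-the-envelope calculation helps show why architects are pushing compute toward memory. The sketch below is a simplified model, not any vendor's actual figures: the layer dimensions are illustrative (loosely AlexNet-like), and it optimistically assumes each tensor crosses the memory interface exactly once. It compares arithmetic intensity (FLOPs per byte moved) for a convolutional layer versus a fully connected layer.

```python
# Illustrative model of arithmetic intensity for two common layer types.
# Dimensions are assumed (loosely AlexNet-like), not taken from any
# specific chip or benchmark result.

def conv_arithmetic_intensity(h, w, c_in, c_out, k, bytes_per_val=4):
    """FLOPs per byte for a k x k convolution over an h x w feature map,
    assuming inputs, weights, and outputs each move exactly once."""
    flops = 2 * h * w * c_out * c_in * k * k        # multiply-accumulates
    data = bytes_per_val * (h * w * c_in            # input activations
                            + c_in * c_out * k * k  # weights
                            + h * w * c_out)        # output activations
    return flops / data

def fc_arithmetic_intensity(n_in, n_out, bytes_per_val=4):
    """FLOPs per byte for a fully connected layer; each weight is
    used only once per input, so data movement dominates."""
    flops = 2 * n_in * n_out
    data = bytes_per_val * (n_in + n_in * n_out + n_out)
    return flops / data

# A conv layer reuses each weight across the whole feature map, so it
# performs hundreds of FLOPs per byte moved; a fully connected layer
# performs well under one FLOP per byte and is memory-bandwidth-bound.
print(conv_arithmetic_intensity(27, 27, 96, 256, 5))
print(fc_arithmetic_intensity(4096, 4096))
```

Under this simple model, the memory-bound phases of training are the ones that stand to gain most from moving logic into the memory devices themselves, since every byte that never crosses the memory interface saves both time and energy.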