2016: DianNao Family: Energy-Efficient Hardware Accelerators for Machine Learning


  • Published in
  • https://cacm.acm.org/
  • Communications of the ACM

  • Reference:
Chen Y, Chen T, Xu Z, et al. DianNao Family: Energy-Efficient Hardware Accelerators for Machine Learning[J]. Communications of the ACM, 2016, 59(11): 105-112.
  • https://cacm.acm.org/magazines/2016/11/209123-diannao-family/fulltext
  • Found the full text there.

  • Downloaded the PDF right away.
The original version of this paper is entitled "DianNao: A Small-Footprint, High-Throughput Accelerator for Ubiquitous Machine Learning" and was published in Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) 49, 4 (March 2014), ACM, New York, NY, 269-284.

Abstract

  • ML pervasive
    • broad range of applications
      • broad range of systems(embedded to data centers)

  • computing
    • trending toward heterogeneous multi-cores
    • a mix of cores and hardware accelerators
  • designing hardware accelerators for ML
    • achieve high efficiency and broad application scope

Paragraph 2

  • efficient computational primitives
    • important for a hardware accelerator,
  • inefficient memory transfers can
    • potentially void the throughput, energy, or cost advantages of accelerators,
  • an Amdahl’s law effect
  • become a first-order concern,

  • as in processors,
    • rather than an element factored into accelerator design as a second step

  • a series of hardware accelerators
    • designed for ML(nn),
    • the impact of memory on accelerator design, performance, and energy.

  • on representative neural network layers:
  • a speedup of 450.65x over a GPU
  • and 150.31x less energy on average
    • for the 64-chip DaDianNao (a member of the DianNao family)

1 INTRODUCTION

  • designing hardware accelerators which realize the best possible tradeoff between flexibility and efficiency is becoming a prominent
    issue.

  • The first question is for which category of applications one should primarily design accelerators?
  • Together with the architecture trend towards accelerators, a second simultaneous and significant trend in high-performance and embedded applications is developing: many of the emerging high-performance and embedded applications, from image/video/audio recognition to automatic translation, business analytics, and robotics rely on machine learning
    techniques.
  • This application trend comes together with a third trend in machine learning (ML), where a small number
    of techniques, based on neural networks (especially deep learning techniques[16, 26]), have proved in the past few
    years to be state-of-the-art across a broad range of applications.
  • As a result, there is a unique opportunity to design accelerators having significant application scope as well as
    high performance and efficiency.[4]

Paragraph 2

  • Currently, ML workloads
  • mostly executed on
    • multicores using SIMD[44]
    • on GPUs[7]
    • or on FPGAs[2]

  • the aforementioned trends
    • have already been identified
    • by researchers who have proposed accelerators implementing,
  • CNNs[2]
  • Multi-Layer Perceptrons [43] ;

  • accelerators focusing on other domains,
    • image processing,
    • propose efficient implementations of some of the computational primitives used
    • by machine-learning techniques, such as convolutions[37]

  • There are also ASIC implementations of ML
    • such as Support Vector Machine and CNNs.

  • these works focused on
    • efficiently implementing the computational primitives
      • ignore memory transfers for the sake of simplicity[37,43]
      • plug their computational accelerator to memory via a more or less sophisticated DMA. [2,12,19]

Paragraph 3

  • While efficient implementation of computational primitives is a first and important step with promising results,
    inefficient memory transfers can potentially void the throughput, energy, or cost advantages of accelerators, that is, an
    Amdahl’s law effect, and thus, they should become a first-
    order concern, just like in processors, rather than an element
    factored in accelerator design on a second step.

  • Unlike in processors though, one can factor in the specific nature of
    memory transfers in target algorithms, just like it is done for accelerating computations.

  • This is especially important in the domain of ML where there is a clear trend towards scaling up the size of learning models in order to achieve better accuracy and more functionality.[16, 24]

Paragraph 4

  • In this article, we introduce a series of hardware accelerators designed for ML (especially neural networks), including
    DianNao, DaDianNao, ShiDianNao, and PuDianNao as listed in Table 1.
  • We focus our study on memory usage, and we investigate the accelerator architecture to minimize memory
    transfers and to perform them as efficiently as possible.

2 DIANNAO: A NN ACCELERATOR

  • DianNao
    • the first of the DianNao accelerator family,
  • accommodates state-of-the-art NN techniques (deep learning),
  • and inherits the broad application scope of NNs.

2.1 Architecture

  • DianNao
    • input buffer for input (NBin)
    • output buffer for output (NBout)
    • buffer for synaptic weights (SB)
    • connected to a computational block (performing both synapses and neurons computations)
    • NFU, and CP, see Figure 1

NBin holds the input neurons.
SB holds the synaptic weights.
NBout holds the output neurons.

The way I read the figure: 2 input neurons and 2 synapses are multiplied pairwise and summed into 1 output neuron. But this NFU is powerful: it can compute two output neurons at once.

NFU

  • a functional block of $T_i$ inputs/synapses
    • and $T_n$ output neurons,
  • time-shared by different algorithmic blocks of neurons.

The NFU operates on $T_i$ inputs and synapses to produce $T_n$ output neurons, but shouldn't the synapses number $T_i \times T_n$??

  • Depending on the layer type,
    • computations at the NFU can be decomposed in either two or three stages

  • For classifier and convolutional layers:
    • multiplication of synapses $\times$ inputs: NFU-1
    • addition of all the multiplications: NFU-2
    • sigmoid: NFU-3

For a classifier or convolutional layer, it is simply synapses $\times$ inputs, summed, then a sigmoid. That I can follow; this case is just convolution.

If it is a classifier layer, then the inputs are

  • last stage (sigmoid or another nonlinear function) can vary.

  • For pooling, no multiplication(no synapse),
    • pooling can be average or max.

  • the adders have multiple inputs,
    • they are in fact adder trees,

  • the second stage also contains
    • shifters and max operators for pooling.

Why would shifters be needed?? (Presumably for average pooling: dividing by a power-of-two pool size is just a right shift.)

  • the sigmoid function (for classifier and convolutional layers) can be efficiently implemented with piecewise linear interpolation, $f(x) = a_i x + b_i$ for $x \in [x_i, x_{i+1}]$ (16 segments are sufficient)

On-chip Storage

  • on-chip storage structures of DianNao
    • can be construed as modified buffers or scratchpads.

  • While a cache is an excellent storage structure for a general-purpose processor, it is a sub-optimal way to exploit reuse because of the cache access overhead (tag check, associativity, line size, speculative read, etc.) and cache conflicts.
  • The efficient alternative, scratchpad, is used in VLIW processors but it is known to be very difficult to compile for.
  • However a scratchpad in a dedicated accelerator realizes the best of both worlds: efficient
    storage, and both efficient and easy exploitation of locality because only a few algorithms have to be manually adapted.
Paragraph 2
  • on-chip storage is split into three structures (NBin, NBout, and SB), because there are three types of data (input neurons, output neurons, and synapses) with different characteristics (read width and reuse distance).

  • The first benefit of splitting structures is to tailor the SRAMs to the appropriate
    read/write width,
  • and the second benefit of splitting storage structures is to avoid conflicts, as would occur in a cache.
  • Moreover, we implement three DMAs to exploit spatial locality of data, one for each buffer (two load DMAs for inputs, one store DMA for outputs).

2.2 Loop tiling

  • DianNao uses loop tiling to reduce memory accesses
    • so it can accommodate large neural networks
  • An example
    • a classifier layer
      • $N_n$ output neurons
      • fully connected to $N_i$ inputs
      • shown in the figure below

$N_n$ outputs, $N_i$ inputs: the synapse matrix is $N_n \times N_i$, and multiplying this matrix by the length-$N_i$ input vector gives the result.

  • First fetch one tile
    • a point of confusion at first:
    • what if the first output element depends on every input element?
    • how would you compute it then?
    • in fact, computing the first output element
    • only needs one row of the synapse matrix!
    • so the huge synapse matrix never has to be on chip all at once
  • Below are the original code and
    • the tiled code,
    • which maps the classifier layer onto DianNao


for(int n=0;n<Nn;n++)
	sum[n]=0;
for(int n=0;n<Nn;n++) //输出神经元
	for(int i=0;i<Ni;i++) //输入神经元
		sum[n]+=synapse[n][i]*neuron[i];
for(int n=0;n<Nn;n++)
	neuron[n]=Sigmoid(sum[n]);		
  • My take:
    • bring in Tnn outputs at a time
    • and Tii inputs
    • but that is still too large for the hardware
    • so split again
    • into Tn and Ti
    • and that's it
for (int nnn = 0; nnn < Nn; nnn += Tnn) {
    // tiling over output neurons:
    // this loop stages Tnn outputs at a time
    for (int iii = 0; iii < Ni; iii += Tii) {
        // tiling over input neurons:
        // this loop stages Tii inputs at a time
        // everything below works on these two tiles

        for (int nn = nnn; nn < nnn + Tnn; nn += Tn) {
            // Tnn is still too large, so split again into
            // sub-tiles of size Tn (each starting at nn)

            for (int n = nn; n < nn + Tn; n++)
                sum[n] = 0; // step 1: clear the partial sums
            // sum[n] = row n of synapse dotted with all of neuron
            for (int ii = iii; ii < iii + Tii; ii += Ti)
                // split Tii further into chunks of Ti
                for (int n = nn; n < nn + Tn; n++)
                    for (int i = ii; i < ii + Ti; i++)
                        sum[n] += synapse[n][i] * neuron[i];

            for (int nn = nnn; nn < nnn + Tnn; nn += Tn)
                neuron[n] = sigmoid(sum[n]); // bug: n is out of scope here
        }
    }
}
  • In the tiled code, $ii$ and $nn$
    • reflect the NFU having $T_i$ inputs/synapses
      • and $T_n$ output neurons
  • input neurons are reused by every output neuron
    • but the input vector is far too large
    • to fit into NBin
    • so loop $ii$ is also tiled, with factor $T_{ii}$

The code above is definitely wrong (its sigmoid loop indexes with an out-of-scope n); a corrected version:

for (int nnn = 0; nnn < Nn; nnn += Tnn) {
    for (int nn = nnn; nn < nnn + Tnn; nn += Tn) {
        for (int n = nn; n < nn + Tn; n++)
            sum[n] = 0;
        for (int iii = 0; iii < Ni; iii += Tii) {
            for (int ii = iii; ii < iii + Tii; ii += Ti)
                for (int n = nn; n < nn + Tn; n++)
                    for (int i = ii; i < ii + Ti; i++)
                        sum[n] += synapse[n][i] * neuron[i];
        }
        for (int n = nn; n < nn + Tn; n++)
            printf("s%ds ", sum[n]);
    }
}
for (int index = 0; index < Nn; index++)
    printf("%d ", sum[index]);
Copyright notice: this is the blogger's original post, released under the CC 4.0 BY-SA license; please attach the original source link and this notice when reposting.
Original link: https://blog.csdn.net/zhoutianzi12/article/details/110244427
