<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title>北叹荒歌</title>
<subtitle>A computer-vision model-training ("炼丹") journal</subtitle>
<link href="/atom.xml" rel="self"/>
<link href="http://yoursite.com/"/>
<updated>2020-03-25T07:27:32.090Z</updated>
<id>http://yoursite.com/</id>
<author>
<name>Bruce</name>
</author>
<generator uri="https://hexo.io/">Hexo</generator>
<entry>
<title>Object Detection: SSD</title>
<link href="http://yoursite.com/2020/03/25/%E7%9B%AE%E6%A0%87%E6%A3%80%E6%B5%8B%E2%80%94%E2%80%94SSD/"/>
<id>http://yoursite.com/2020/03/25/%E7%9B%AE%E6%A0%87%E6%A3%80%E6%B5%8B%E2%80%94%E2%80%94SSD/</id>
<published>2020-03-25T07:27:25.009Z</published>
<updated>2020-03-25T07:27:32.090Z</updated>
<content type="html"><![CDATA[<p><em>SSD borrows the one-stage idea of yolov1: a single network directly classifies bounding boxes and regresses their locations, and it also adopts the anchor mechanism of Faster RCNN to improve accuracy.</em><br><strong>So what does SSD change to address yolov1's low accuracy, imprecise localization, and poor small-object detection?</strong></p><h1 id="一-Model"><a href="#一-Model" class="headerlink" title="一 Model"></a>I. Model</h1><p><strong>Key innovations</strong></p><h2 id="1-多尺度特转图预测"><a href="#1-多尺度特转图预测" class="headerlink" title="1.多尺度特转图预测"></a>1. Multi-scale feature map prediction</h2><p>Feature maps of different sizes have different receptive fields, so they can detect objects of different sizes. A larger map has a smaller receptive field and suits relatively small objects; a smaller map has a larger receptive field and suits relatively large objects. (<em>Anchor design is based precisely on the receptive field.</em>)</p><p>In SSD, an 8x8 map (see figure) is divided into more cells; the anchors of each grid cell have a smaller scale and are better suited to detecting small objects. (This also illustrates why a larger map has a smaller receptive field.)<br><img src="https://img-blog.csdnimg.cn/20200324214700442.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dhbmdkczAwMA==,size_16,color_FFFFFF,t_70" alt="figure"><br>A 4x4 map is divided into fewer cells, the anchors of each cell have a larger scale, and they better suit large objects. Anchor design requires that, when the ground truth is annotated, each gt box be assigned to the fixed set of default boxes output at each map location.</p><p><strong><em>Multi-scale prediction makes SSD more stable than yolov1 on small objects, yet SSD still has an inherent weakness on small objects. Why?</em></strong></p><ul><li>There are two main reasons:<br>(1) <strong>SSD does not exploit low-level features sufficiently.</strong><br>SSD is a fully convolutional detector that uses different layers to detect objects of different sizes, but a tension arises here: <strong><em>shallow maps are large and high-resolution but semantically weak; deep maps are semantically rich but, after repeated pooling, too small.</em></strong> Detecting small objects requires both a map large enough to provide fine features and dense sampling, and enough semantic information to separate objects from the background. In SSD, the shallow layers responsible for small objects lack semantics, while the deeper maps become too small and lose much positional information, so detection and regression of small objects fall short of the requirements.<br>(2) <strong>The anchor settings are not entirely reasonable.</strong><br>The tension in (1) matters less when the network is deep enough, but SSD sets the anchor scale for small objects to 0.2, so on a 720p image the minimum detectable size is already 144 pixels, which is still too large. One remedy is to generate anchors of more scales in the corresponding feature layers, enough to cover sufficiently small objects, but the anchor count then grows and speed drops.</li></ul><h2 id="2-采用卷积方式预测"><a href="#2-采用卷积方式预测" class="headerlink" title="2.采用卷积方式预测"></a>2. Prediction by convolution</h2><p>yolov1 predicts with a final FC layer; SSD instead feeds 6 feature maps of different scales into two 3x3 convolutions for prediction, which works for images of any size.</p><p>The base network uses VGG as the backbone:<br><img src="https://img-blog.csdnimg.cn/20200325121547520.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dhbmdkczAwMA==,size_16,color_FFFFFF,t_70#pic_center" alt="figure"><br>Points to <strong>note</strong> about this architecture:</p><ul><li>The original FC6 and FC7 are converted into 3x3 and 1x1 convolutions respectively, and the converted layers are initialized with the FC6/FC7 parameters.</li><li>VGG's 2x2 pool5 with stride=2 is replaced by a 3x3 pool with stride=1.</li><li>The converted conv6 uses dilated convolution with dilation_rate=6 to compensate for the receptive field.</li></ul><p>For the final detection, SSD feeds the 6 feature maps into two 3x3 convolutions: the classifier convolution outputs anchor_num x 21 channels, and the regressor convolution outputs anchor_num x 4 channels.</p><h2 id="3-Anchor的设计"><a href="#3-Anchor的设计" class="headerlink" title="3.Anchor的设计"></a>3. Anchor design</h2><p>A series of concentric default boxes is generated around the center of each feature-map cell (the center coordinates are then multiplied by the step, mapping the map location back to the original image). Six maps of different sizes are used for prediction; the scale of the lowest map is S<em>min</em>=0.2, that of the highest is S<em>max</em>=0.95, and the intermediate layers are set by the following formula:<br><img src="https://img-blog.csdnimg.cn/2020032512382190.png" alt="figure"><br>With aspect ratios a<em>r</em> in [1, 2, 3, 1/2, 1/3], the width w and height h of each default box are computed by:<br><img src="https://img-blog.csdnimg.cn/20200325124552217.png" alt="figure"><br>In addition, each cell predicts two square default boxes:<br><img src="https://img-blog.csdnimg.cn/20200325124701854.png" alt="figure"><br>So the 6 predicted boxes are:<br><img src="https://img-blog.csdnimg.cn/20200325124741685.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dhbmdkczAwMA==,size_16,color_FFFFFF,t_70" alt="figure"></p><h1 id="二-Training"><a href="#二-Training" class="headerlink" title="二 Training"></a>II. Training</h1><h2 id="1-Default-boxes匹配策略"><a href="#1-Default-boxes匹配策略" class="headerlink" title="1.Default boxes匹配策略"></a>1. Default-box matching strategy</h2><p>During training every anchor must be matched against the ground truth; the boxes of matched anchors are responsible for predicting the objects.</p><p>In yolov1, the grid cell containing the gt center is responsible, using its box with the highest IoU. SSD instead matches anchors to gt by two rules:<br>(1) Each gt is matched to the default box with the highest IoU with it; matched anchors are positive samples, the rest negative. But gts are few and anchors many, so this rule alone would leave most anchors negative and the positive and negative samples unbalanced, hence the second rule;<br>(2) Any remaining anchor whose IoU with some gt exceeds 0.5 is also matched to that gt. A gt may therefore match several anchors, which is fine; the reverse is not: an anchor matches at most one gt. If several gts have IoU > 0.5 with one anchor, the anchor is matched to the gt with the highest IoU.</p><h2 id="2-难负例挖掘-amp-数据增强"><a href="#2-难负例挖掘-amp-数据增强" class="headerlink" title="2.难负例挖掘&数据增强"></a>2. Hard negative mining &amp; data augmentation</h2><p>Even though one gt can match several anchors, gts remain far fewer than anchors, so negatives vastly outnumber positives. To keep the samples balanced, SSD applies hard negative mining: the negatives are sorted by confidence loss in descending order (the lower the predicted background confidence, the larger the loss), and the top-k with the largest loss are used as training negatives.</p><p>For data augmentation, SSD mainly uses horizontal flips, random crops with color distortion, and random patch sampling.</p><h2 id="3-损失函数"><a href="#3-损失函数" class="headerlink" title="3.损失函数"></a>3. Loss function</h2><p>SSD's training objective derives from the MultiBox objective, which SSD extends to handle multiple object classes.</p><p>The loss has two parts: a confidence loss plus a localization loss.<br><img src="https://img-blog.csdnimg.cn/20200325152121341.png" alt="figure"><br>The confidence loss is a softmax loss:<br><img src="https://img-blog.csdnimg.cn/2020032515245684.png" alt="figure"><br>The localization loss is a smooth L1 loss:<br><img src="https://img-blog.csdnimg.cn/20200325152258638.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dhbmdkczAwMA==,size_16,color_FFFFFF,t_70" alt="figure"></p>]]></content>
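The two-rule matching strategy in this entry's training section can be sketched in a few lines of NumPy. This is a minimal illustration under assumptions, not SSD's reference implementation: `match_anchors` is a hypothetical helper taking a precomputed anchor-by-gt IoU matrix, and the 0.5 threshold is the value given in the post.

```python
import numpy as np

def match_anchors(iou, threshold=0.5):
    """iou: (num_anchors, num_gt) IoU matrix. Returns, per anchor, the index
    of its matched gt, or -1 for a negative (unmatched) anchor."""
    num_anchors, num_gt = iou.shape
    match = np.full(num_anchors, -1, dtype=int)
    # Rule 1: each gt claims the anchor with the highest IoU with it.
    best_anchor_per_gt = iou.argmax(axis=0)
    match[best_anchor_per_gt] = np.arange(num_gt)
    # Rule 2: any remaining anchor with IoU > threshold against some gt is
    # matched to its single best gt (an anchor matches at most one gt).
    best_gt_per_anchor = iou.argmax(axis=1)
    best_iou_per_anchor = iou.max(axis=1)
    extra = (match == -1) & (best_iou_per_anchor > threshold)
    match[extra] = best_gt_per_anchor[extra]
    return match
```

With 4 anchors and 2 gts, an anchor whose best IoU is 0.6 is picked up by rule 2, while one whose best IoU is only 0.3 stays negative.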
<summary type="html">
<p><em>SSD borrows the one-stage idea of yolov1: a single network directly classifies bounding boxes and regresses their locations, and it also adopts the anchor mechanism of Faster RCNN to improve accuracy.</em><br><strong>So what does SSD change to address yolov1's low accuracy, imprecise
</summary>
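The hard negative mining step described in this entry (sort negatives by confidence loss, keep the top-k) can be sketched as below. A hedged sketch: `hard_negative_mining` is a hypothetical helper, and the 3:1 negative-to-positive cap is the ratio used in the SSD paper, not a value stated in the post.

```python
import numpy as np

def hard_negative_mining(conf_loss, positive_mask, neg_pos_ratio=3):
    """conf_loss: per-anchor confidence loss; positive_mask: bool array of
    matched (positive) anchors. Returns a mask of the negatives to train on."""
    # Exclude positives from the ranking entirely.
    neg_loss = np.where(positive_mask, -np.inf, conf_loss)
    num_pos = int(positive_mask.sum())
    num_neg = min(neg_pos_ratio * num_pos, int((~positive_mask).sum()))
    order = np.argsort(-neg_loss)            # descending confidence loss
    selected = np.zeros_like(positive_mask)  # bool mask, all False
    selected[order[:num_neg]] = True
    return selected
```

With one positive and ratio 3, the three negatives with the largest loss are kept and the easiest negative is dropped.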
</entry>
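The default-box sizing described in the entry above (scales interpolated between S_min=0.2 and S_max=0.95 across the 6 maps, one box per aspect ratio, plus an extra square box at the geometric mean of adjacent scales) can be sketched as follows. `ssd_scales` and `default_box_sizes` are hypothetical helper names, and the scale endpoints are the values quoted in the post.

```python
import math

def ssd_scales(m=6, s_min=0.2, s_max=0.95):
    """Linearly spaced scales s_k for the m feature maps."""
    return [s_min + (s_max - s_min) * (k - 1) / (m - 1) for k in range(1, m + 1)]

def default_box_sizes(s_k, s_k1, ratios=(1, 2, 3, 1/2, 1/3)):
    """(w, h) pairs for one cell: w = s_k*sqrt(a_r), h = s_k/sqrt(a_r) for each
    aspect ratio, plus an extra square box at scale sqrt(s_k * s_{k+1})."""
    boxes = [(s_k * math.sqrt(r), s_k / math.sqrt(r)) for r in ratios]
    boxes.append((math.sqrt(s_k * s_k1), math.sqrt(s_k * s_k1)))
    return boxes

scales = ssd_scales()
# 6 boxes per cell on the lowest map: 5 ratios (two of them squares
# counting the extra box) + 1 extra square at sqrt(s_1 * s_2)
boxes = default_box_sizes(scales[0], scales[1])
```

Ratio 1 already yields one square of side s_k, so together with the extra box each cell predicts two squares, matching the 6 boxes per cell in the post.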
<entry>
<title>Image Classification on Your Own Dataset with VGG-16</title>
<link href="http://yoursite.com/2020/03/24/%E5%9F%BA%E4%BA%8EVGG-16%E8%AE%AD%E7%BB%83%E8%87%AA%E5%B7%B1%E7%9A%84%E6%95%B0%E6%8D%AE%E9%9B%86%E8%BF%9B%E8%A1%8C%E5%9B%BE%E5%83%8F%E5%88%86%E7%B1%BB/"/>
<id>http://yoursite.com/2020/03/24/%E5%9F%BA%E4%BA%8EVGG-16%E8%AE%AD%E7%BB%83%E8%87%AA%E5%B7%B1%E7%9A%84%E6%95%B0%E6%8D%AE%E9%9B%86%E8%BF%9B%E8%A1%8C%E5%9B%BE%E5%83%8F%E5%88%86%E7%B1%BB/</id>
<published>2020-03-24T06:12:58.657Z</published>
<updated>2020-03-24T06:13:05.807Z</updated>
<content type="html"><![CDATA[<h1 id="具体步骤"><a href="#具体步骤" class="headerlink" title="具体步骤"></a>具体步骤</h1><h2 id="1-数据处理"><a href="#1-数据处理" class="headerlink" title="1.数据处理"></a>1.数据处理</h2><pre><code>数据介绍</code></pre><p>首先准备自及欲分类的数据集。本文中我使用的是自己采集的数据集<<strong>车架号拓片</strong>>,该数据集包括<strong>数字10类</strong>,<strong>字母19类</strong>。</p><pre><code>数据转换</code></pre><p>将datasets转换为tfrecord格式(这里说明一下,注意有时转换出的数据集可能会是0字节,需要重新转换。)<br>文件格式:<br>—img<br>——文件夹0<br>———xxx.jpg<br>——文件夹1<br>———xxx.jpg<br>. . . . . .<br>——文件夹29<br>———xxx.jpg<br><strong>create_tfrecords.py</strong></p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span 
class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> os</span><br><span class="line"><span class="keyword">import</span> tensorflow <span class="keyword">as</span> tf</span><br><span class="line"><span class="keyword">from</span> PIL <span class="keyword">import</span> Image</span><br><span class="line"><span class="keyword">import</span> sys</span><br><span class="line"><span class="keyword">import</span> matplotlib.pyplot <span class="keyword">as</span> plt</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">creat_tf</span><span class="params">(imgpath)</span>:</span></span><br><span class="line"> </span><br><span class="line"> cwd = os.getcwd()</span><br><span class="line"> classes = os.listdir(cwd + imgpath)</span><br><span class="line"> </span><br><span class="line"> <span class="comment"># 此处定义tfrecords文件存放</span></span><br><span class="line"> writer = tf.python_io.TFRecordWriter(<span class="string">"train.tfrecords"</span>)</span><br><span class="line"> <span class="keyword">for</span> index, name <span class="keyword">in</span> enumerate(classes):</span><br><span class="line"> class_path = cwd + imgpath + name + <span class="string">"/"</span></span><br><span class="line"> print(class_path)</span><br><span class="line"> <span class="keyword">if</span> os.path.isdir(class_path):</span><br><span class="line"> <span class="keyword">for</span> img_name <span class="keyword">in</span> os.listdir(class_path):</span><br><span class="line"> img_path = class_path + img_name</span><br><span class="line"> img = Image.open(img_path)</span><br><span class="line"> img = img.resize((<span class="number">224</span>, <span class="number">224</span>))</span><br><span class="line"> img_raw = img.tobytes() </span><br><span class="line"> 
example = tf.train.Example(features=tf.train.Features(feature={</span><br><span class="line"> <span class="string">'label'</span>: tf.train.Feature(int64_list=tf.train.Int64List(value=[int(name)])),</span><br><span class="line"> <span class="string">'img_raw'</span>: tf.train.Feature(bytes_list=tf.train.BytesList(value=[img_raw]))</span><br><span class="line"> }))</span><br><span class="line"> writer.write(example.SerializeToString()) </span><br><span class="line"> print(img_name)</span><br><span class="line"> writer.close()</span><br><span class="line"> </span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">read_example</span><span class="params">()</span>:</span></span><br><span class="line"> <span class="comment">#简单的读取例子:</span></span><br><span class="line"> <span class="keyword">for</span> serialized_example <span class="keyword">in</span> tf.python_io.tf_record_iterator(<span class="string">"train.tfrecords"</span>):</span><br><span class="line"> example = tf.train.Example()</span><br><span class="line"> example.ParseFromString(serialized_example)</span><br><span class="line"> </span><br><span class="line"> <span class="comment">#image = example.features.feature['img_raw'].bytes_list.value</span></span><br><span class="line"> label = example.features.feature[<span class="string">'label'</span>].int64_list.value</span><br><span class="line"> <span class="comment"># 可以做一些预处理之类的</span></span><br><span class="line"> <span class="comment"># print(label)</span></span><br><span class="line"></span><br><span class="line"> </span><br><span class="line"><span class="keyword">if</span> __name__ == <span class="string">'__main__'</span>:</span><br><span class="line"> imgpath = <span class="string">'./img/'</span></span><br><span class="line"> creat_tf(imgpath)</span><br><span class="line"> <span class="comment">#read_example()</span></span><br></pre></td></tr></table></figure><h2 id="2-加载模型及调试训练"><a href="#2-加载模型及调试训练" 
class="headerlink" title="2.加载模型及调试训练"></a>2.加载模型及调试训练</h2><p>VGG16预训练模型<a href="https://baiduyunpan" target="_blank" rel="noopener">vgg16.npy</a><br><strong>VGG16.py</strong></p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span 
class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span 
class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> tensorflow <span class="keyword">as</span> tf</span><br><span class="line"><span class="keyword">import</span> numpy <span class="keyword">as</span> np </span><br><span class="line">tf.reset_default_graph()</span><br><span class="line"> </span><br><span class="line"><span class="comment"># 加载预训练模型</span></span><br><span class="line">data_dict = np.load(<span class="string">'./vgg16.npy'</span>, encoding=<span class="string">'latin1'</span>).item()</span><br><span class="line"></span><br><span class="line"><span class="comment"># 打印每层信息</span></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">print_layer</span><span class="params">(t)</span>:</span></span><br><span class="line"> print(t.op.name, <span class="string">' '</span>, t.get_shape().as_list(), <span class="string">'\n'</span>)</span><br><span class="line"> </span><br><span class="line"><span class="comment"># 定义卷积层</span></span><br><span class="line"><span class="string">"""</span></span><br><span class="line"><span class="string">此处权重初始化定义了3种方式:</span></span><br><span class="line"><span class="string"> 1.预训练模型参数</span></span><br><span class="line"><span class="string"> 2.截尾正态,参考书上采用该方式</span></span><br><span class="line"><span class="string"> 3.xavier,网上blog有采用该方式</span></span><br><span class="line"><span class="string">通过参数finetrun和xavier控制选择哪种方式,有兴趣的可以都试试</span></span><br><span class="line"><span class="string">"""</span></span><br><span 
class="line"><span class="function"><span class="keyword">def</span> <span class="title">conv</span><span class="params">(x, d_out, name, fineturn=False, xavier=False)</span>:</span></span><br><span class="line"> d_in = x.get_shape()[<span class="number">-1</span>].value</span><br><span class="line"> <span class="keyword">with</span> tf.name_scope(name) <span class="keyword">as</span> scope:</span><br><span class="line"> <span class="comment"># Fine-tuning </span></span><br><span class="line"> <span class="keyword">if</span> fineturn:</span><br><span class="line"> kernel = tf.constant(data_dict[name][<span class="number">0</span>], name=<span class="string">"weights"</span>)</span><br><span class="line"> bias = tf.constant(data_dict[name][<span class="number">1</span>], name=<span class="string">"bias"</span>)</span><br><span class="line"> <span class="comment">#print("fineturn")</span></span><br><span class="line"> <span class="keyword">elif</span> <span class="keyword">not</span> xavier:</span><br><span class="line"> kernel = tf.Variable(tf.truncated_normal([<span class="number">3</span>, <span class="number">3</span>, d_in, d_out], stddev=<span class="number">0.1</span>), name=<span class="string">'weights'</span>)</span><br><span class="line"> bias = tf.Variable(tf.constant(<span class="number">0.0</span>, dtype=tf.float32, shape=[d_out]),</span><br><span class="line"> trainable=<span class="literal">True</span>, </span><br><span class="line"> name=<span class="string">'bias'</span>)</span><br><span class="line"> <span class="comment">#print("truncated_normal")</span></span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> kernel = tf.get_variable(scope+<span class="string">'weights'</span>, shape=[<span class="number">3</span>, <span class="number">3</span>, d_in, d_out], </span><br><span class="line"> dtype=tf.float32,</span><br><span class="line"> 
initializer=tf.contrib.layers.xavier_initializer_conv2d())</span><br><span class="line"> bias = tf.Variable(tf.constant(<span class="number">0.0</span>, dtype=tf.float32, shape=[d_out]),</span><br><span class="line"> trainable=<span class="literal">True</span>, </span><br><span class="line"> name=<span class="string">'bias'</span>)</span><br><span class="line"> <span class="comment">#print("xavier")</span></span><br><span class="line"> conv = tf.nn.conv2d(x, kernel,[<span class="number">1</span>, <span class="number">1</span>, <span class="number">1</span>, <span class="number">1</span>], padding=<span class="string">'SAME'</span>)</span><br><span class="line"> activation = tf.nn.relu(conv + bias, name=scope)</span><br><span class="line"> <span class="comment">#print_layer(activation)</span></span><br><span class="line"> <span class="keyword">return</span> activation</span><br><span class="line"> </span><br><span class="line"><span class="comment"># 最大池化层</span></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">maxpool</span><span class="params">(x, name)</span>:</span></span><br><span class="line"> activation = tf.nn.max_pool(x, [<span class="number">1</span>, <span class="number">2</span>, <span class="number">2</span>, <span class="number">1</span>], [<span class="number">1</span>, <span class="number">2</span>, <span class="number">2</span>, <span class="number">1</span>], padding=<span class="string">'VALID'</span>, name=name) </span><br><span class="line"> <span class="comment">#print_layer(activation)</span></span><br><span class="line"> <span class="keyword">return</span> activation</span><br><span class="line"> </span><br><span class="line"><span class="comment"># 定义全连接层</span></span><br><span class="line"><span class="string">"""</span></span><br><span class="line"><span class="string">此处权重初始化定义了3种方式:</span></span><br><span class="line"><span class="string"> 1.预训练模型参数</span></span><br><span 
class="line"><span class="string"> 2.截尾正态,参考书上采用该方式</span></span><br><span class="line"><span class="string"> 3.xavier,网上blog有采用该方式</span></span><br><span class="line"><span class="string">通过参数finetrun和xavier控制选择哪种方式,有兴趣的可以都试试</span></span><br><span class="line"><span class="string">"""</span></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">fc</span><span class="params">(x, n_out, name, fineturn=False, xavier=False)</span>:</span></span><br><span class="line"> n_in = x.get_shape()[<span class="number">-1</span>].value</span><br><span class="line"> <span class="keyword">with</span> tf.name_scope(name) <span class="keyword">as</span> scope:</span><br><span class="line"> <span class="keyword">if</span> fineturn:</span><br><span class="line"> weight = tf.constant(data_dict[name][<span class="number">0</span>], name=<span class="string">"weights"</span>)</span><br><span class="line"> bias = tf.constant(data_dict[name][<span class="number">1</span>], name=<span class="string">"bias"</span>)</span><br><span class="line"> <span class="comment">#print("fineturn")</span></span><br><span class="line"> <span class="keyword">elif</span> <span class="keyword">not</span> xavier:</span><br><span class="line"> weight = tf.Variable(tf.truncated_normal([n_in, n_out], stddev=<span class="number">0.01</span>), name=<span class="string">'weights'</span>)</span><br><span class="line"> bias = tf.Variable(tf.constant(<span class="number">0.1</span>, dtype=tf.float32, shape=[n_out]), </span><br><span class="line"> trainable=<span class="literal">True</span>, </span><br><span class="line"> name=<span class="string">'bias'</span>)</span><br><span class="line"> <span class="comment">#print("truncated_normal")</span></span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> weight = tf.get_variable(scope+<span class="string">'weights'</span>, shape=[n_in, n_out], </span><br><span class="line"> 
dtype=tf.float32,</span><br><span class="line"> initializer=tf.contrib.layers.xavier_initializer_conv2d())</span><br><span class="line"> bias = tf.Variable(tf.constant(<span class="number">0.1</span>, dtype=tf.float32, shape=[n_out]), </span><br><span class="line"> trainable=<span class="literal">True</span>, </span><br><span class="line"> name=<span class="string">'bias'</span>)</span><br><span class="line"> <span class="comment">#print("xavier")</span></span><br><span class="line"> <span class="comment"># 全连接层可以使用relu_layer函数比较方便,不用像卷积层使用relu函数</span></span><br><span class="line"> activation = tf.nn.relu_layer(x, weight, bias, name=name)</span><br><span class="line"> <span class="comment">#print_layer(activation)</span></span><br><span class="line"> <span class="keyword">return</span> activation</span><br><span class="line"> </span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">VGG_16</span><span class="params">(images, _dropout, n_cls)</span>:</span></span><br><span class="line"> <span class="string">"""</span></span><br><span class="line"><span class="string"> 此处权重初始化方式采用的是:</span></span><br><span class="line"><span class="string"> 卷积层使用预训练模型中的参数</span></span><br><span class="line"><span class="string"> 全连接层使用xavier类型初始化</span></span><br><span class="line"><span class="string"> """</span></span><br><span class="line"> conv1_1 = conv(images, <span class="number">64</span>, <span class="string">'conv1_1'</span>, fineturn=<span class="literal">True</span>)</span><br><span class="line"> conv1_2 = conv(conv1_1, <span class="number">64</span>, <span class="string">'conv1_2'</span>, fineturn=<span class="literal">True</span>)</span><br><span class="line"> pool1 = maxpool(conv1_2, <span class="string">'pool1'</span>)</span><br><span class="line"> </span><br><span class="line"> conv2_1 = conv(pool1, <span class="number">128</span>, <span class="string">'conv2_1'</span>, fineturn=<span 
class="literal">True</span>)</span><br><span class="line"> conv2_2 = conv(conv2_1, <span class="number">128</span>, <span class="string">'conv2_2'</span>, fineturn=<span class="literal">True</span>)</span><br><span class="line"> pool2 = maxpool(conv2_2, <span class="string">'pool2'</span>)</span><br><span class="line"> </span><br><span class="line"> conv3_1 = conv(pool2, <span class="number">256</span>, <span class="string">'conv3_1'</span>, fineturn=<span class="literal">True</span>)</span><br><span class="line"> conv3_2 = conv(conv3_1, <span class="number">256</span>, <span class="string">'conv3_2'</span>, fineturn=<span class="literal">True</span>)</span><br><span class="line"> conv3_3 = conv(conv3_2, <span class="number">256</span>, <span class="string">'conv3_3'</span>, fineturn=<span class="literal">True</span>)</span><br><span class="line"> pool3 = maxpool(conv3_3, <span class="string">'pool3'</span>)</span><br><span class="line"> </span><br><span class="line"> conv4_1 = conv(pool3, <span class="number">512</span>, <span class="string">'conv4_1'</span>, fineturn=<span class="literal">True</span>)</span><br><span class="line"> conv4_2 = conv(conv4_1, <span class="number">512</span>, <span class="string">'conv4_2'</span>, fineturn=<span class="literal">True</span>)</span><br><span class="line"> conv4_3 = conv(conv4_2, <span class="number">512</span>, <span class="string">'conv4_3'</span>, fineturn=<span class="literal">True</span>)</span><br><span class="line"> pool4 = maxpool(conv4_3, <span class="string">'pool4'</span>)</span><br><span class="line"> </span><br><span class="line"> conv5_1 = conv(pool4, <span class="number">512</span>, <span class="string">'conv5_1'</span>, fineturn=<span class="literal">True</span>)</span><br><span class="line"> conv5_2 = conv(conv5_1, <span class="number">512</span>, <span class="string">'conv5_2'</span>, fineturn=<span class="literal">True</span>)</span><br><span class="line"> conv5_3 = conv(conv5_2, <span 
class="number">512</span>, <span class="string">'conv5_3'</span>, fineturn=<span class="literal">True</span>)</span><br><span class="line"> pool5 = maxpool(conv5_3, <span class="string">'pool5'</span>)</span><br><span class="line"> </span><br><span class="line"> <span class="string">'''</span></span><br><span class="line"><span class="string"> 因为训练自己的数据,全连接层最好不要使用预训练参数</span></span><br><span class="line"><span class="string"> '''</span></span><br><span class="line"> flatten = tf.reshape(pool5, [<span class="number">-1</span>, <span class="number">7</span>*<span class="number">7</span>*<span class="number">512</span>])</span><br><span class="line"> fc6 = fc(flatten, <span class="number">4096</span>, <span class="string">'fc6'</span>, xavier=<span class="literal">True</span>)</span><br><span class="line"> dropout1 = tf.nn.dropout(fc6, _dropout)</span><br><span class="line"> </span><br><span class="line"> fc7 = fc(dropout1, <span class="number">4096</span>, <span class="string">'fc7'</span>, xavier=<span class="literal">True</span>)</span><br><span class="line"> dropout2 = tf.nn.dropout(fc7, _dropout)</span><br><span class="line"> </span><br><span class="line"> fc8 = fc(dropout2, n_cls, <span class="string">'fc8'</span>, xavier=<span class="literal">True</span>)</span><br><span class="line"> </span><br><span class="line"> <span class="keyword">return</span> fc8</span><br></pre></td></tr></table></figure><p><strong>train.py</strong></p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span 
class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span 
class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">#coding=utf-8</span></span><br><span class="line"> </span><br><span class="line"><span class="keyword">import</span> tensorflow <span class="keyword">as</span> tf </span><br><span class="line"><span class="comment">#import numpy as np </span></span><br><span class="line"><span class="comment">#import pdb</span></span><br><span class="line"><span class="keyword">from</span> datetime <span class="keyword">import</span> datetime</span><br><span class="line"><span class="comment">#from VGG16 import *</span></span><br><span class="line"><span class="keyword">import</span> VGG16</span><br><span class="line"><span class="comment">#tf.reset_default_graph()</span></span><br><span class="line"></span><br><span class="line">batch_size = <span class="number">24</span></span><br><span class="line">lr = <span class="number">0.0001</span></span><br><span class="line">n_cls = <span class="number">29</span> <span class="comment"># change to your own number of classes when training</span></span><br><span class="line">max_steps = <span class="number">15000</span></span><br><span class="line"> </span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">read_and_decode</span><span class="params">(filename)</span>:</span></span><br><span class="line"> <span class="comment"># build a filename queue from the given file</span></span><br><span class="line"> filename_queue = tf.train.string_input_producer([filename])</span><br><span class="line"> </span><br><span class="line"> reader = 
tf.TFRecordReader()</span><br><span class="line"> _, serialized_example = reader.read(filename_queue) <span class="comment"># returns the filename and the serialized example</span></span><br><span class="line"> features = tf.parse_single_example(serialized_example,</span><br><span class="line"> features={</span><br><span class="line"> <span class="string">'label'</span>: tf.FixedLenFeature([], tf.int64),</span><br><span class="line"> <span class="string">'img_raw'</span> : tf.FixedLenFeature([], tf.string),</span><br><span class="line"> })</span><br><span class="line"> </span><br><span class="line"> img = tf.decode_raw(features[<span class="string">'img_raw'</span>], tf.uint8)</span><br><span class="line"> img = tf.reshape(img, [<span class="number">224</span>, <span class="number">224</span>, <span class="number">3</span>])</span><br><span class="line"> <span class="comment"># cast to float32 and (optionally) normalize</span></span><br><span class="line"> img = tf.cast(img, tf.float32)<span class="comment"># * (1. / 255)</span></span><br><span class="line"> label = tf.cast(features[<span class="string">'label'</span>], tf.int64)</span><br><span class="line"> <span class="keyword">return</span> img, label</span><br><span class="line"> </span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">train</span><span class="params">()</span>:</span></span><br><span class="line"> x = tf.placeholder(dtype=tf.float32, shape=[<span class="literal">None</span>, <span class="number">224</span>, <span class="number">224</span>, <span class="number">3</span>], name=<span class="string">'input'</span>)</span><br><span class="line"> y = tf.placeholder(dtype=tf.float32, shape=[<span class="literal">None</span>, n_cls], name=<span class="string">'label'</span>)</span><br><span class="line"> keep_prob = tf.placeholder(tf.float32)</span><br><span class="line"> output = VGG16.VGG_16(x, keep_prob, n_cls)</span><br><span class="line"> <span class="comment">#probs = 
tf.nn.softmax(output)</span></span><br><span class="line"> </span><br><span class="line"> loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=output, labels=y))</span><br><span class="line"> <span class="comment">#train_step = tf.train.AdamOptimizer(learning_rate=0.1).minimize(loss)</span></span><br><span class="line"> train_step = tf.train.GradientDescentOptimizer(learning_rate=lr).minimize(loss)</span><br><span class="line"> </span><br><span class="line"> accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(output,<span class="number">1</span>), tf.argmax(y, <span class="number">1</span>)), tf.float32))</span><br><span class="line"> </span><br><span class="line"> images, labels = read_and_decode(<span class="string">'./train.tfrecords'</span>)</span><br><span class="line"> img_batch, label_batch = tf.train.shuffle_batch([images, labels],</span><br><span class="line"> batch_size=batch_size,</span><br><span class="line"> capacity=<span class="number">392</span>,</span><br><span class="line"> min_after_dequeue=<span class="number">200</span>)</span><br><span class="line"> label_batch = tf.one_hot(label_batch, n_cls, <span class="number">1</span>, <span class="number">0</span>)</span><br><span class="line"> <span class="comment"># merge all summary ops for logging</span></span><br><span class="line"> summary_op = tf.summary.merge_all()</span><br><span class="line"></span><br><span class="line"> init = tf.global_variables_initializer()</span><br><span class="line"> <span class="comment">#saver = tf.train.Saver()</span></span><br><span class="line"> saver = tf.train.Saver(max_to_keep=<span class="number">3</span>)</span><br><span class="line"> max_acc = <span class="number">0</span></span><br><span class="line"> <span class="keyword">with</span> tf.Session() <span class="keyword">as</span> sess:</span><br><span class="line"> sess.run(init)</span><br><span class="line"> train_writer = tf.summary.FileWriter(<span class="string">'./logs'</span>,sess.graph)</span><br><span 
class="line"> coord = tf.train.Coordinator()</span><br><span class="line"> threads = tf.train.start_queue_runners(sess=sess, coord=coord)</span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> range(max_steps):</span><br><span class="line"> batch_x, batch_y = sess.run([img_batch, label_batch])</span><br><span class="line"><span class="comment"># print batch_x, batch_x.shape</span></span><br><span class="line"><span class="comment"># print batch_y</span></span><br><span class="line"><span class="comment"># pdb.set_trace()</span></span><br><span class="line"> _, loss_val = sess.run([train_step, loss], feed_dict={x:batch_x, y:batch_y, keep_prob:<span class="number">0.8</span>})</span><br><span class="line"> <span class="keyword">if</span> i%<span class="number">10</span> == <span class="number">0</span>:</span><br><span class="line"> train_arr = accuracy.eval(feed_dict={x:batch_x, y: batch_y, keep_prob: <span class="number">1.0</span>})</span><br><span class="line"> print(<span class="string">"%s: Step [%d] Loss : %f, training accuracy : %g"</span> % (datetime.now(), i, loss_val, train_arr))</span><br><span class="line"> <span class="comment"># the commented lines below save only at the end of training; adjust them to save every N iterations</span></span><br><span class="line"> <span class="comment">#if (i + 1) == max_steps:</span></span><br><span class="line"> <span class="comment">#checkpoint_path = os.path.join(FLAGS.train_dir, './model/model.ckpt')</span></span><br><span class="line"> <span class="comment">#saver.save(sess, './model/model.ckpt', global_step=i)</span></span><br><span class="line"> summary_str = sess.run(summary_op, feed_dict={x:batch_x, y:batch_y, keep_prob:<span class="number">1.0</span>})</span><br><span class="line"> train_writer.add_summary(summary_str,i)</span><br><span class="line"> <span class="comment"># track the best accuracy and keep the 3 most recent checkpoints</span></span><br><span class="line"> <span class="keyword">if</span> train_arr > max_acc:</span><br><span class="line"> max_acc = train_arr; saver.save(sess, <span class="string">'./model/model.ckpt'</span>, global_step=i + <span 
class="number">1</span>)</span><br><span class="line"> coord.request_stop()</span><br><span class="line"> coord.join(threads)</span><br><span class="line"> <span class="comment">#saver.save(sess, 'model/model.ckpt')</span></span><br><span class="line"></span><br><span class="line"> </span><br><span class="line"><span class="keyword">if</span> __name__ == <span class="string">'__main__'</span>:</span><br><span class="line"> train()</span><br></pre></td></tr></table></figure><p>Once training starts, you should see output like this:<br><img src="https://img-blog.csdnimg.cn/20190816085014243.png" alt="training log output"></p><h2 id="3-测试评估"><a href="#3-测试评估" class="headerlink" title="3.测试评估"></a>3. Testing and Evaluation</h2><p><strong>test.py</strong></p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span 
class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> tensorflow <span class="keyword">as</span> tf </span><br><span class="line"><span class="keyword">import</span> numpy <span class="keyword">as</span> np </span><br><span class="line"><span class="keyword">import</span> pdb</span><br><span class="line"><span class="keyword">from</span> datetime <span class="keyword">import</span> datetime</span><br><span class="line"><span class="keyword">from</span> VGG16 <span class="keyword">import</span> *</span><br><span class="line"><span class="keyword">import</span> cv2</span><br><span class="line"><span class="keyword">import</span> os</span><br><span class="line"><span class="comment">#import matplotlib.pyplot as plt</span></span><br><span class="line"> </span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">test</span><span class="params">(path)</span>:</span></span><br><span class="line"> </span><br><span class="line"> x = tf.placeholder(dtype=tf.float32, shape=[<span class="literal">None</span>, <span class="number">224</span>, <span class="number">224</span>, <span class="number">3</span>], name=<span class="string">'input'</span>)</span><br><span class="line"> keep_prob = tf.placeholder(tf.float32)</span><br><span class="line"> <span class="comment"># remember to change this to your own number of classes; the output here has 29 classes</span></span><br><span class="line"> output = VGG_16(x, keep_prob, <span class="number">29</span>)</span><br><span class="line"> score = tf.nn.softmax(output)</span><br><span class="line"><span class="comment"># returns the index of the highest-confidence class in each row</span></span><br><span class="line"> f_cls = tf.argmax(score, <span class="number">1</span>)</span><br><span class="line"></span><br><span class="line"> sess = 
tf.InteractiveSession()</span><br><span class="line"> sess.run(tf.global_variables_initializer())</span><br><span class="line"> saver = tf.train.Saver()</span><br><span class="line"> <span class="comment"># location of the trained model checkpoint</span></span><br><span class="line"> saver.restore(sess, <span class="string">'./model/model.ckpt-3000'</span>)</span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> os.listdir(path):</span><br><span class="line"> imgpath = os.path.join(path, i)</span><br><span class="line"> im = cv2.imread(imgpath)</span><br><span class="line"> im = cv2.resize(im, (<span class="number">224</span> , <span class="number">224</span>))<span class="comment"># * (1. / 255)</span></span><br><span class="line"> </span><br><span class="line"> im = np.expand_dims(im, axis=<span class="number">0</span>)</span><br><span class="line"> <span class="comment"># set keep_prob to 1.0 at test time</span></span><br><span class="line"> pred, _score = sess.run([f_cls, score], feed_dict={x:im, keep_prob:<span class="number">1.0</span>})</span><br><span class="line"> prob = round(np.max(_score), <span class="number">4</span>)</span><br><span class="line"><span class="comment"># print the predicted class index and confidence for the test image</span></span><br><span class="line"> print(<span class="string">"{} rubbing class is: {}, score: {}"</span>.format(i, int(pred), prob))</span><br><span class="line"> <span class="comment"># plt.imshow(im)</span></span><br><span class="line"> <span class="comment"># plt.imshow(im1)</span></span><br><span class="line"> <span class="comment"># plt.title(u'prediction: %i' % pred)</span></span><br><span class="line"> <span class="comment"># plt.show() </span></span><br><span class="line"> sess.close()</span><br><span class="line"> </span><br><span class="line"><span class="keyword">if</span> __name__ == <span class="string">'__main__'</span>:</span><br><span class="line"> <span class="comment"># test images are stored in this folder; the leading digits in each filename give the class</span></span><br><span class="line"> path = <span 
class="string">'./test_img'</span></span><br><span class="line"> test(path)</span><br></pre></td></tr></table></figure><p>Here are the test results:<br><img src="https://img-blog.csdnimg.cn/20190816102311170.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dhbmdkczAwMA==,size_16,color_FFFFFF,t_70" alt=""><br>The left side shows the test images and the right side shows the predictions. Because the dataset is small and the training time was short, the final results are not great, and a few characters are classified rather poorly.</p>]]></content>
<summary type="html">
<h1 id="具体步骤"><a href="#具体步骤" class="headerlink" title="具体步骤"></a>Detailed Steps</h1><h2 id="1-数据处理"><a href="#1-数据处理" class="headerlink" title="1.数据处
</summary>
</entry>
<entry>
<title>Hello World</title>
<link href="http://yoursite.com/2020/03/22/hello-world/"/>
<id>http://yoursite.com/2020/03/22/hello-world/</id>
<published>2020-03-22T07:51:35.986Z</published>
<updated>2020-03-22T07:51:35.986Z</updated>
<content type="html"><![CDATA[<p>Welcome to <a href="https://hexo.io/" target="_blank" rel="noopener">Hexo</a>! This is your very first post. Check <a href="https://hexo.io/docs/" target="_blank" rel="noopener">documentation</a> for more info. If you get any problems when using Hexo, you can find the answer in <a href="https://hexo.io/docs/troubleshooting.html" target="_blank" rel="noopener">troubleshooting</a> or you can ask me on <a href="https://github.com/hexojs/hexo/issues" target="_blank" rel="noopener">GitHub</a>.</p><h2 id="Quick-Start"><a href="#Quick-Start" class="headerlink" title="Quick Start"></a>Quick Start</h2><h3 id="Create-a-new-post"><a href="#Create-a-new-post" class="headerlink" title="Create a new post"></a>Create a new post</h3><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">$ hexo new <span class="string">"My New Post"</span></span><br></pre></td></tr></table></figure><p>More info: <a href="https://hexo.io/docs/writing.html" target="_blank" rel="noopener">Writing</a></p><h3 id="Run-server"><a href="#Run-server" class="headerlink" title="Run server"></a>Run server</h3><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">$ hexo server</span><br></pre></td></tr></table></figure><p>More info: <a href="https://hexo.io/docs/server.html" target="_blank" rel="noopener">Server</a></p><h3 id="Generate-static-files"><a href="#Generate-static-files" class="headerlink" title="Generate static files"></a>Generate static files</h3><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">$ hexo generate</span><br></pre></td></tr></table></figure><p>More info: <a href="https://hexo.io/docs/generating.html" target="_blank" rel="noopener">Generating</a></p><h3 
id="Deploy-to-remote-sites"><a href="#Deploy-to-remote-sites" class="headerlink" title="Deploy to remote sites"></a>Deploy to remote sites</h3><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">$ hexo deploy</span><br></pre></td></tr></table></figure><p>More info: <a href="https://hexo.io/docs/one-command-deployment.html" target="_blank" rel="noopener">Deployment</a></p>]]></content>
<summary type="html">
<p>Welcome to <a href="https://hexo.io/" target="_blank" rel="noopener">Hexo</a>! This is your very first post. Check <a href="https://hexo.
</summary>
</entry>
</feed>