Bug fix.
Remove incorrect use of the Roboto font, which caused a compile error on systems where no Roboto font is installed.
Fix issue #68: "Regularization" --> "正则化".
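Rather than deleting the `\RobotoLight` calls outright, the dependency could also have been made optional. A minimal sketch (not part of this commit) using `fontspec`'s `\IfFontExistsTF`, assuming the book is built with XeLaTeX or LuaLaTeX as the `westernfonts` setup suggests:

```latex
% Sketch only: guard the optional font so the document still
% compiles on systems where Roboto is not installed.
\usepackage{fontspec}
\IfFontExistsTF{Roboto Light}
  {\newfontfamily\RobotoLight{Roboto Light}}
  {\newcommand{\RobotoLight}{}} % fall back to the current font
```

With the fallback defined as a no-op, existing `{\RobotoLight ...}` groups simply render in the surrounding font instead of aborting the build.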
zhanggyb committed Jun 3, 2017
1 parent 73c0d04 commit 109cd93
Showing 7 changed files with 12 additions and 12 deletions.
8 changes: 4 additions & 4 deletions chap3.tex
@@ -667,7 +667,7 @@ \subsection*{问题}
使用这个表达式,我们可以在使用柔性最大值层和对数似然代价的网络上应用\gls*{bp}。
\end{itemize}

-\section{过度拟合和规范化}
+\section{过度拟合和正则化}
\label{sec:overfitting_and_regularization}

诺贝尔奖获得者,物理学家恩里科·费米有一次被问到他对一些同僚提出的一个数学模型的
@@ -843,7 +843,7 @@ \section{过度拟合和规范化}
\gls*{overfitting}。不幸的是,训练数据其实是很难或者很昂贵的资源,所以这不是一种太切实际的
选择。

-\subsection{规范化}
+\subsection{正则化}

增加训练样本的数量是一种减轻\gls*{overfitting}的方法。还有其他的方法能够减轻\gls*{overfitting}的程度
吗?一种可行的方式就是降低网络的规模。然而,大的网络拥有一种比小网络更强的潜力,
@@ -1031,7 +1031,7 @@ \subsection{规范化}
仅会引起在那个方向发生微小的变化。我相信这个现象让我们的学习算法更难有效地探索权
重空间,最终导致很难找到\gls*{cost-func}的最优值。

-\subsection{为何规范化可以帮助减轻过度拟合}
+\subsection{为何正则化可以帮助减轻过度拟合}

% I produce the 3 graphs in MATLAB:
%
@@ -1205,7 +1205,7 @@ \subsection{为何规范化可以帮助减轻过度拟合}
络更加灵活~——~因为,大的\gls*{bias}让神经元更加容易饱和,这有时候是我们所要达到的效果。所
以,我们通常不会对\gls*{bias}进行\gls*{regularization}。

-\subsection{规范化的其他技术}
+\subsection{正则化的其他技术}

除了 L2 外还有很多\gls*{regularization}技术。实际上,正是由于数量众多,我这里也不会将所有的都列
举出来。在本节,我简要地给出三种减轻\gls*{overfitting}的其他的方法:L1 \gls*{regularization}、\gls*{dropout}和人为
2 changes: 1 addition & 1 deletion chap5.tex
@@ -128,7 +128,7 @@ \section{消失的梯度问题}
('0', '1', '2', ..., 9)。

让我们训练 30 个完整的\gls*{epoch},使用\gls*{mini-batch}大小为 10, 学习率 $\eta = 0.1$
-规范化参数 $\lambda = 5.0$。在训练时,我们也会在 \lstinline!validation_data! 上监
+\gls*{regularization}参数 $\lambda = 5.0$。在训练时,我们也会在 \lstinline!validation_data! 上监
控分类的准确度\footnote{注意网络可能需要花费几分钟来训练,要看你机器的速度。所以
如果你正在运行代码,你可能愿意继续阅读并稍后回来,而不是等待代码完成执行。}:
\begin{lstlisting}[language=Python]
4 changes: 2 additions & 2 deletions glossaries.tex
@@ -118,7 +118,7 @@
}

\newglossaryentry{regularization}{
-name={规范化},
+name={正则化},
description={\emph{Regularization}}
}

@@ -128,7 +128,7 @@
}

\newglossaryentry{regularization-term}{
-name={规范化项},
+name={正则化项},
description={\emph{Regularization Term}}
}

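The glossary change above propagates automatically: every `\gls*{regularization}` call in the chapters expands to the entry's `name` field, which is why only the `\newglossaryentry` definitions need editing. A minimal sketch of the mechanism (assuming the `glossaries` package, as glossaries.tex implies):

```latex
% Sketch only: with the glossaries package, \gls* typesets the
% entry's name, so renaming it here updates every reference.
\usepackage{glossaries}
\newglossaryentry{regularization}{
  name={正则化},
  description={\emph{Regularization}}
}
% Later, in a chapter: \gls*{regularization} now typesets as "正则化".
```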
2 changes: 1 addition & 1 deletion images/basic_manipulation.tex
@@ -11,6 +11,6 @@

\begin{document}
\manipulateSingleHiddenNeuron{8}{-4}{
-\RobotoLight 顶部隐藏神经元的输出
+顶部隐藏神经元的输出
}
\end{document}
2 changes: 1 addition & 1 deletion images/circuit_multiplication.tex
@@ -8,7 +8,7 @@
\input{../westernfonts}

\begin{document}
-\begin{tikzpicture}[font={\RobotoLight}]
+\begin{tikzpicture}

\foreach \x in {0,...,26}
\coordinate (p\x) at (\x * 0.4,0);
4 changes: 2 additions & 2 deletions images/initial_gradient.tex
@@ -18,8 +18,8 @@
\node(r\x) [neuron] at (7, 2.2 * \x) {};
}

-\node [above] at (l5.north) {\RobotoLight hidden layer 1};
-\node [above] at (r5.north) {\RobotoLight hidden layer 2};
+\node [above] at (l5.north) {hidden layer 1};
+\node [above] at (r5.north) {hidden layer 2};

\foreach \x in {0,...,5}
\foreach \y in {0,...,5}
2 changes: 1 addition & 1 deletion images/shallow_circuit.tex
@@ -8,7 +8,7 @@
\input{../westernfonts}

\begin{document}
-\begin{tikzpicture}[font={\RobotoLight}]
+\begin{tikzpicture}

\foreach \x in {0,...,26}
\draw (\x * 0.4,0) -- (\x * 0.4, 1);
