Hi developers,

I have recently been revisiting the backpropagation process. In the implementation of backpropagation for the fully connected neural network (`class NetWork`):

```python
def calc_gradient(self, label):
    delta = self.layers[-1].activator.backward(self.layers[-1].output) \
        * (label - self.layers[-1].output)
    for layer in self.layers[::-1]:
        layer.backward(delta)
        delta = layer.delta
    return delta
```

Reading this alongside the article, I don't understand what the first line of this gradient computation is actually calculating. Backpropagation is supposed to start from the loss function, but this method does not start from the loss value; instead it starts from the activation output of the output-layer neurons. Is this correct? If so, could someone please explain it to me? Many thanks for your help.
Specifically, I don't understand why the first line of the method above performs this particular computation during backpropagation:

```python
self.layers[-1].activator.backward(self.layers[-1].output) * (label - self.layers[-1].output)
```
It does start from the loss function, just from its derivative rather than its value. With the squared-error loss `E = 0.5 * (t - y)^2` and a sigmoid output, the output-layer delta is `y * (1 - y) * (t - y)`: here `activator.backward(self.layers[-1].output)` computes the sigmoid derivative `y * (1 - y)`, and `(label - self.layers[-1].output)` is `(t - y)`, which is the negative of `dE/dy`. The loss value itself never appears in backpropagation; only its gradient with respect to the outputs does.
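To see that the first line really is a gradient of the loss, here is a minimal sketch (my own, not from the repo, assuming a sigmoid activator and squared-error loss `E = 0.5 * sum((t - y)**2)`) that compares `delta` with a finite-difference gradient of `E` with respect to the output layer's pre-activation `z`:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(z, t):
    # Squared-error loss applied to the sigmoid of the pre-activation z.
    y = sigmoid(z)
    return 0.5 * np.sum((t - y) ** 2)

rng = np.random.default_rng(0)
z = rng.normal(size=5)   # pre-activations of the output layer
t = rng.random(5)        # target values (the `label`)
y = sigmoid(z)           # activations, i.e. self.layers[-1].output

# Analytic delta, as in calc_gradient's first line:
# activator.backward(output) * (label - output) == y*(1-y) * (t-y)
delta = y * (1.0 - y) * (t - y)

# Finite-difference check: delta should equal -dE/dz elementwise.
eps = 1e-6
num_grad = np.array([
    (loss(z + eps * np.eye(5)[i], t) - loss(z - eps * np.eye(5)[i], t)) / (2 * eps)
    for i in range(5)
])
print(np.allclose(delta, -num_grad))  # True
```

So the method never needs the scalar loss value; the `(label - output)` factor already carries the loss gradient, and the `activator.backward` factor chains it through the activation function.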