Yes, you're correct. The (2/N) factor should be pulled out of the for loop to match the equation. But since (2/N) is a constant, the end result is the same, so your code computes the same values as Siraj's code.
Yes, I think the only benefit of pulling out (2/N) is making the algorithm slightly faster: the division and multiplication happen once instead of N times.
Hi,
the (2/N) factor could be pulled out of the for loop, since it sits outside the sigma in the partial-derivative equation, correct?
So something like this:
```python
def step_gradient(b_current, m_current, points, learningRate):
    b_gradient = 0
    m_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        b_gradient += -(y - ((m_current * x) + b_current))            # (2/N) outta here
        m_gradient += -x * (y - ((m_current * x) + b_current))        # (2/N) outta here
    new_b = b_current - (learningRate * ((2/N) * b_gradient))         # (2/N) to be used here
    new_m = m_current - (learningRate * ((2/N) * m_gradient))         # (2/N) to be used here
    return [new_b, new_m]
```
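Since (2/N) is constant across the sum, applying it to every term inside the loop or once after the loop yields the same gradient (up to floating-point rounding). A quick sketch comparing the two placements; the sample data here is made up for illustration, not from the repo:

```python
import numpy as np

def step_inner(b_current, m_current, points, learningRate):
    # Version with (2/N) applied to every term inside the loop
    b_gradient, m_gradient = 0.0, 0.0
    N = float(len(points))
    for i in range(len(points)):
        x, y = points[i, 0], points[i, 1]
        b_gradient += -(2/N) * (y - ((m_current * x) + b_current))
        m_gradient += -(2/N) * x * (y - ((m_current * x) + b_current))
    return [b_current - learningRate * b_gradient,
            m_current - learningRate * m_gradient]

def step_factored(b_current, m_current, points, learningRate):
    # Version with (2/N) factored out and applied once after the loop
    b_gradient, m_gradient = 0.0, 0.0
    N = float(len(points))
    for i in range(len(points)):
        x, y = points[i, 0], points[i, 1]
        b_gradient += -(y - ((m_current * x) + b_current))
        m_gradient += -x * (y - ((m_current * x) + b_current))
    return [b_current - learningRate * (2/N) * b_gradient,
            m_current - learningRate * (2/N) * m_gradient]

# Illustrative data: y roughly 2x + 1 with a little noise
points = np.array([[1.0, 3.1], [2.0, 4.9], [3.0, 7.2], [4.0, 8.8]])
a = step_inner(0.0, 0.0, points, 0.01)
b = step_factored(0.0, 0.0, points, 0.01)
print(a, b)  # the two results agree up to floating-point rounding
```

The only numerical caveat is associativity: multiplying each term by (2/N) before summing can differ from scaling the final sum in the last few bits, which is why the comparison below uses a tolerance rather than exact equality.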