Contradictory samples? Decremental learning? #1216
FedericoMz asked this question in Q&A · Unanswered
-

Let's say my data stream contains a sample (x, y1). Later, (x, y2) arrives: the same sample as before, but with a different label. For simplicity, we can assume the later sample has the "correct" label.

How can I handle this case with River? Looking at the documentation, I don't think it is possible to "detrain" a model, as decremental learning is not supported. Is there any alternative? Perhaps training the model twice on (x, y2) to reduce the importance of (x, y1), something like the sketch below?
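A minimal sketch of what I mean, assuming a binary task; the logistic regression model and the single-feature dict are just placeholders, not my actual setup:

```python
from river import linear_model

model = linear_model.LogisticRegression()

x = {"feature": 1.0}  # placeholder features

model.learn_one(x, True)  # the early sample (x, y1)

# Later the corrected sample (x, y2) arrives; train on it twice,
# hoping to outweigh the earlier, contradictory update.
model.learn_one(x, False)
model.learn_one(x, False)

print(model.predict_proba_one(x))  # e.g. {False: ..., True: ...}
```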
-

Hey there 👋 There is no explicit way to "unlearn" a sample; it would be too complicated. The problem you describe is connected to the notion of concept drift: as the data stream moves on, the distribution of the data changes. That's one of the advantages online learning offers: it adapts. The more data you show the model, the more it forgets about past data. This happens naturally with any online learning algorithm. The pace at which the model forgets depends on the model and its parameters. For instance, increasing the learning rate of a linear model increases the pace. Hope this helps.
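For illustration, here is a minimal sketch of that effect (the feature dict and the two learning rates are made-up values): after training on (x, y1) and then on the contradictory (x, y2), the model with the larger learning rate has moved further towards the newer label.

```python
from river import linear_model, optim

x = {"feature": 1.0}  # illustrative sample

for lr in (0.01, 0.5):
    # Plain logistic regression trained with SGD at the given learning rate.
    model = linear_model.LogisticRegression(optimizer=optim.SGD(lr))
    model.learn_one(x, True)   # old, now "wrong" label y1
    model.learn_one(x, False)  # newer, "correct" label y2
    proba = model.predict_proba_one(x)
    print(f"lr={lr}: P(y=False) = {proba[False]:.3f}")
```

With the small learning rate the two updates nearly cancel out, while the larger one leaves the model leaning towards the newer label.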