[python-package] fix retrain on sequence dataset #6414
base: master
Changes from all commits: 2b7811b, a07800c, 3ac186c, ecd5746, b3bcf37, 48f062c
```diff
@@ -217,6 +217,43 @@ def test_sequence_get_data(num_seq):
     np.testing.assert_array_equal(subset_data.get_data(), X[sorted(used_indices)])


+def test_retrain_list_of_sequence():
+    X, y = load_breast_cancer(return_X_y=True)
+    seqs = _create_sequence_from_ndarray(X, 2, 100)
+
+    seq_ds = lgb.Dataset(seqs, label=y, free_raw_data=False)
+
+    assert sum([len(s) for s in seq_ds.get_data()]) == X.shape[0]
+    assert len(seq_ds.get_feature_name()) == X.shape[1]
+    assert seq_ds.get_data() == seqs
```
**Reviewer:** These checks should be moved after training, to avoid this test failure: https://github.com/microsoft/LightGBM/actions/runs/9935170010/job/27451324230?pr=6414

Please run the tests yourself before pushing another commit:

```sh
sh build-python.sh bdist_wheel install
pytest tests/python_package_tests/test_basic.py::test_retrain_list_of_sequence
```

**Author:** Oh, I only tested the code in a Jupyter notebook for this case.

**Reviewer:** Ok, I will push testing changes for you.
```diff
+    params = {
+        "objective": "binary",
+        "num_boost_round": 20,
+        "min_data": 10,
+        "num_leaves": 10,
+        "verbose": -1,
+    }
```
```diff
+    model1 = lgb.train(
+        params,
+        seq_ds,
+        keep_training_booster=True,
```
**Reviewer** (suggested change: remove `keep_training_booster=True`): I expect it will be more common to instead want to continue training with a model loaded from a file + a `Sequence` object in memory. Could you please modify this test to not use `keep_training_booster=True`?

**Author:** Because I have a rolling timeseries training project. Since it is in the loop, there is no need to dump the model to a file; I just reuse it.

**Reviewer:** Thank you for explaining that. Very interesting use of `keep_training_booster=True`. But the fact that you want to use this functionality in one specific way (with the model held in memory the entire time) does not mean that that's the only pattern that should be tested. It's very common to use LightGBM's training-continuation functionality starting from a model file... for example, to update an existing model once a month based on newly-arrived data. It's important that all LightGBM training-continuation codepaths support that pattern. Anyway, like I mentioned in #6414 (comment), I can push testing changes here. Once you see the diff of the changes I push, I'd be happy to answer any questions you have.
```diff
+    )
```
```diff
+
+    assert model1.current_iteration() == 20
+    assert model1.num_trees() == 20
+
+    model2 = lgb.train(
+        params,
+        seq_ds,
+        init_model=model1,
+    )
+
+    assert model2.current_iteration() == 20
+    assert model2.num_trees() == 20
```
**Reviewer** (on lines +253 to +254, suggested change): These don't look correct. Performing training once with […]
```diff
+
+
 def test_chunked_dataset():
     X_train, X_test, y_train, y_test = train_test_split(
         *load_breast_cancer(return_X_y=True), test_size=0.1, random_state=2
```
**Reviewer:** Why was `free_raw_data=False` necessary here? If it wasn't, please remove it.

**Author:** If `free_raw_data=True`, `model2` cannot get the data; it would raise an exception, as I remember.