diff --git a/R-package/R/lgb.Booster.R b/R-package/R/lgb.Booster.R
index 95b854110d2d..949038fde622 100644
--- a/R-package/R/lgb.Booster.R
+++ b/R-package/R/lgb.Booster.R
@@ -843,6 +843,9 @@ Booster <- R6::R6Class(
 #'             passing the prediction type through \code{params} instead of through this argument might
 #'             result in factor levels for classification objectives not being applied correctly to the
 #'             resulting output.
+#'
+#'             \emph{New in version 4.0.0}
+#'
 #' @param start_iteration int or None, optional (default=None)
 #'                        Start index of the iteration to predict.
 #'                        If None or <= 0, starts from the first iteration.
@@ -861,6 +864,9 @@ NULL
 #' @name predict.lgb.Booster
 #' @title Predict method for LightGBM model
 #' @description Predicted values based on class \code{lgb.Booster}
+#'
+#' \emph{New in version 4.0.0}
+#'
 #' @details If the model object has been configured for fast single-row predictions through
 #'          \link{lgb.configure_fast_predict}, this function will use the prediction parameters
 #'          that were configured for it - as such, extra prediction parameters should not be passed
@@ -878,6 +884,9 @@ NULL
 #'                If single-row predictions are going to be performed frequently, it is recommended to
 #'                pre-configure the model object for fast single-row sparse predictions through function
 #'                \link{lgb.configure_fast_predict}.
+#'
+#'                \emph{Changed from 'data' in version 4.0.0}
+#'
 #' @param header only used for prediction for text file. True if text file has header
 #' @param ... ignored
 #' @return For prediction types that are meant to always return one output per observation (e.g. when predicting
@@ -1137,6 +1146,9 @@ lgb.configure_fast_predict <- function(model,
 #' @name print.lgb.Booster
 #' @title Print method for LightGBM model
 #' @description Show summary information about a LightGBM model object (same as \code{summary}).
+#'
+#' \emph{New in version 4.0.0}
+#'
 #' @param x Object of class \code{lgb.Booster}
 #' @param ... Not used
 #' @return The same input \code{x}, returned as invisible.
@@ -1186,6 +1198,9 @@ print.lgb.Booster <- function(x, ...) {
 #' @name summary.lgb.Booster
 #' @title Summary method for LightGBM model
 #' @description Show summary information about a LightGBM model object (same as \code{print}).
+#'
+#' \emph{New in version 4.0.0}
+#'
 #' @param object Object of class \code{lgb.Booster}
 #' @param ... Not used
 #' @return The same input \code{object}, returned as invisible.
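The R ``predict`` arguments annotated above mirror the Python package's ``Booster.predict``. For reference, a minimal Python sketch of the iteration-window semantics documented here (``start_iteration`` / ``num_iteration``); the synthetic data and parameter values are illustrative only:

import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
y = X[:, 0] + rng.standard_normal(500)
booster = lgb.train({"objective": "regression", "verbose": -1},
                    lgb.Dataset(X, label=y), num_boost_round=50)

full = booster.predict(X)                                          # all 50 iterations
head = booster.predict(X, start_iteration=0, num_iteration=10)     # only the first 10 trees
raw = booster.predict(X, raw_score=True)                           # untransformed scores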
diff --git a/R-package/R/lgb.drop_serialized.R b/R-package/R/lgb.drop_serialized.R
index 1e1157ff997f..bcc2480e8ccc 100644
--- a/R-package/R/lgb.drop_serialized.R
+++ b/R-package/R/lgb.drop_serialized.R
@@ -4,6 +4,9 @@
 #' a copy of the underlying C++ object as raw bytes, which can be used to reconstruct such object after getting
 #' serialized and de-serialized, but at the cost of extra memory usage. If these raw bytes are not needed anymore,
 #' they can be dropped through this function in order to save memory. Note that the object will be modified in-place.
+#'
+#' \emph{New in version 4.0.0}
+#'
 #' @param model \code{lgb.Booster} object which was produced with `serializable=TRUE`.
 #'
 #' @return \code{lgb.Booster} (the same `model` object that was passed as input, as invisible).
diff --git a/R-package/R/lgb.make_serializable.R b/R-package/R/lgb.make_serializable.R
index 515341f275e1..58bdd194df4d 100644
--- a/R-package/R/lgb.make_serializable.R
+++ b/R-package/R/lgb.make_serializable.R
@@ -4,6 +4,9 @@
 #' be serializable (e.g. cannot save and load with \code{saveRDS} and \code{readRDS}) as it will lack the raw bytes
 #' needed to reconstruct its underlying C++ object. This function can be used to forcibly produce those serialized
 #' raw bytes and make the object serializable. Note that the object will be modified in-place.
+#'
+#' \emph{New in version 4.0.0}
+#'
 #' @param model \code{lgb.Booster} object which was produced with `serializable=FALSE`.
 #'
 #' @return \code{lgb.Booster} (the same `model` object that was passed as input, as invisible).
diff --git a/R-package/R/lgb.restore_handle.R b/R-package/R/lgb.restore_handle.R
index d9c7e2993856..dcb167608888 100644
--- a/R-package/R/lgb.restore_handle.R
+++ b/R-package/R/lgb.restore_handle.R
@@ -5,6 +5,8 @@
 #' object is restored automatically when calling functions such as \code{predict}, but this function can be
 #' used to forcibly restore it beforehand. Note that the object will be modified in-place.
 #'
+#' \emph{New in version 4.0.0}
+#'
 #' @details Be aware that fast single-row prediction configurations are not restored through this
 #' function. If you wish to make fast single-row predictions using a \code{lgb.Booster} loaded this way,
 #' call \link{lgb.configure_fast_predict} on the loaded \code{lgb.Booster} object.
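These three helpers manage the raw-byte payload that lets an R ``lgb.Booster`` survive a ``saveRDS()``/``readRDS()`` round-trip. For comparison, a sketch of the analogous round-trip in the Python package, where pickling a Booster rebuilds the underlying C++ handle transparently; the tiny synthetic model is illustrative only:

import pickle
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = rng.standard_normal(100)
booster = lgb.train({"objective": "regression", "verbose": -1},
                    lgb.Dataset(X, label=y), num_boost_round=5)

payload = pickle.dumps(booster)     # serialize; the model state is carried as a string
restored = pickle.loads(payload)    # the C++ handle is reconstructed on load
assert np.allclose(booster.predict(X), restored.predict(X))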
diff --git a/R-package/R/lightgbm.R b/R-package/R/lightgbm.R
index 51474abbe285..cb3ef31e8afa 100644
--- a/R-package/R/lightgbm.R
+++ b/R-package/R/lightgbm.R
@@ -84,6 +84,9 @@
 #' Producing and keeping these raw bytes however uses extra memory, and if they are not required,
 #' it is possible to avoid producing them by passing `serializable=FALSE`. In such cases, these raw
 #' bytes can be added to the model on demand through function \link{lgb.make_serializable}.
+#'
+#' \emph{New in version 4.0.0}
+#'
 #' @keywords internal
 NULL

@@ -99,6 +102,9 @@ NULL
 #' @param label Vector of labels, used if \code{data} is not an \code{\link{lgb.Dataset}}
 #' @param weights Sample / observation weights for rows in the input data. If \code{NULL}, will assume that all
 #'                observations / rows have the same importance / weight.
+#'
+#'                \emph{Changed from 'weight' in version 4.0.0}
+#'
 #' @param objective Optimization objective (e.g. `"regression"`, `"binary"`, etc.).
 #'                  For a list of accepted objectives, see
 #'                  \href{https://lightgbm.readthedocs.io/en/latest/Parameters.html#objective}{
@@ -112,7 +118,13 @@ NULL
 #'                  \code{label}).
 #'                  \item Otherwise, will use objective \code{"regression"}.
 #'                  }
+#'
+#'                  \emph{New in version 4.0.0}
+#'
 #' @param init_score initial score is the base prediction lightgbm will boost from
+#'
+#'                   \emph{New in version 4.0.0}
+#'
 #' @param num_threads Number of parallel threads to use. For best speed, this should be set to the number of
 #'                    physical cores in the CPU - in a typical x86-64 machine, this corresponds to half the
 #'                    number of maximum threads.
@@ -129,6 +141,9 @@ NULL
 #'
 #'                    This parameter gets overriden by \code{num_threads} and its aliases under \code{params}
 #'                    if passed there.
+#'
+#'                    \emph{New in version 4.0.0}
+#'
 #' @param ... Additional arguments passed to \code{\link{lgb.train}}. For example
 #'     \itemize{
 #'        \item{\code{valids}: a list of \code{lgb.Dataset} objects, used for validation}
diff --git a/R-package/man/lgb.configure_fast_predict.Rd b/R-package/man/lgb.configure_fast_predict.Rd
index 35ed19c77f69..a228aad42e21 100644
--- a/R-package/man/lgb.configure_fast_predict.Rd
+++ b/R-package/man/lgb.configure_fast_predict.Rd
@@ -56,7 +56,9 @@ If <= 0, all iterations from start_iteration are used (no limits).}
   If the model was fit through function \link{lightgbm} and it was passed a factor as labels,
   passing the prediction type through \code{params} instead of through this argument might
   result in factor levels for classification objectives not being applied correctly to the
-  resulting output.}
+  resulting output.
+
+  \emph{New in version 4.0.0}}

\item{params}{a list of additional named parameters. See
\href{https://lightgbm.readthedocs.io/en/latest/Parameters.html#predict-parameters}{
diff --git a/R-package/man/lgb.drop_serialized.Rd b/R-package/man/lgb.drop_serialized.Rd
index 3c7d08fe9aea..ab4642b6170c 100644
--- a/R-package/man/lgb.drop_serialized.Rd
+++ b/R-package/man/lgb.drop_serialized.Rd
@@ -17,6 +17,8 @@ If a LightGBM model object was produced with argument `serializable=TRUE`, the R
 a copy of the underlying C++ object as raw bytes, which can be used to reconstruct such object after getting
 serialized and de-serialized, but at the cost of extra memory usage. If these raw bytes are not needed anymore,
 they can be dropped through this function in order to save memory. Note that the object will be modified in-place.
+
+\emph{New in version 4.0.0}
 }
\seealso{
\link{lgb.restore_handle}, \link{lgb.make_serializable}.
diff --git a/R-package/man/lgb.make_serializable.Rd b/R-package/man/lgb.make_serializable.Rd
index 0a237c4eeb3e..476b9342cd1e 100644
--- a/R-package/man/lgb.make_serializable.Rd
+++ b/R-package/man/lgb.make_serializable.Rd
@@ -17,6 +17,8 @@ If a LightGBM model object was produced with argument `serializable=FALSE`, the
 be serializable (e.g. cannot save and load with \code{saveRDS} and \code{readRDS}) as it will lack the raw bytes
 needed to reconstruct its underlying C++ object. This function can be used to forcibly produce those serialized
 raw bytes and make the object serializable. Note that the object will be modified in-place.
+
+\emph{New in version 4.0.0}
 }
\seealso{
\link{lgb.restore_handle}, \link{lgb.drop_serialized}.
diff --git a/R-package/man/lgb.restore_handle.Rd b/R-package/man/lgb.restore_handle.Rd
index be5bf844fdf2..bbe6f70c85de 100644
--- a/R-package/man/lgb.restore_handle.Rd
+++ b/R-package/man/lgb.restore_handle.Rd
@@ -18,6 +18,8 @@ After a LightGBM model object is de-serialized through functions such as \code{s
 \code{saveRDS}, its underlying C++ object will be blank and needs to be restored to able to use it. Such
 object is restored automatically when calling functions such as \code{predict}, but this function can be
 used to forcibly restore it beforehand. Note that the object will be modified in-place.
+
+\emph{New in version 4.0.0}
 }
\details{
Be aware that fast single-row prediction configurations are not restored through this
diff --git a/R-package/man/lgb_shared_params.Rd b/R-package/man/lgb_shared_params.Rd
index 5b3bc6aad6e2..953bbc822848 100644
--- a/R-package/man/lgb_shared_params.Rd
+++ b/R-package/man/lgb_shared_params.Rd
@@ -105,6 +105,8 @@ Parameter docs shared by \code{lgb.train}, \code{lgb.cv}, and \code{lightgbm}
 Producing and keeping these raw bytes however uses extra memory, and if they are not required,
 it is possible to avoid producing them by passing `serializable=FALSE`. In such cases, these raw
 bytes can be added to the model on demand through function \link{lgb.make_serializable}.
+
+\emph{New in version 4.0.0}
 }
\keyword{internal}
diff --git a/R-package/man/lightgbm.Rd b/R-package/man/lightgbm.Rd
index 52a84e86aaf6..88f3e3188fec 100644
--- a/R-package/man/lightgbm.Rd
+++ b/R-package/man/lightgbm.Rd
@@ -30,7 +30,9 @@ may allow you to pass other types of data like \code{matrix} and then separately
 \item{label}{Vector of labels, used if \code{data} is not an \code{\link{lgb.Dataset}}}

 \item{weights}{Sample / observation weights for rows in the input data. If \code{NULL}, will assume that all
-observations / rows have the same importance / weight.}
+  observations / rows have the same importance / weight.
+
+  \emph{Changed from 'weight' in version 4.0.0}}

 \item{params}{a list of parameters. See \href{https://lightgbm.readthedocs.io/en/latest/Parameters.html}{
 the "Parameters" section of the documentation} for a list of parameters and valid values.}
@@ -67,9 +69,13 @@ set to the iteration number of the best iteration.}
   (note that parameter \code{num_class} in this case will also be determined automatically from
   \code{label}).
   \item Otherwise, will use objective \code{"regression"}.
-  }}
+  }
+
+  \emph{New in version 4.0.0}}

-\item{init_score}{initial score is the base prediction lightgbm will boost from}
+\item{init_score}{initial score is the base prediction lightgbm will boost from
+
+  \emph{New in version 4.0.0}}

 \item{num_threads}{Number of parallel threads to use. For best speed, this should be set to the number of
 physical cores in the CPU - in a typical x86-64 machine, this corresponds to half the
@@ -86,7 +92,9 @@ set to the iteration number of the best iteration.}
 \code{RhpcBLASctl} to be installed.

 This parameter gets overriden by \code{num_threads} and its aliases under \code{params}
-if passed there.}
+  if passed there.
+
+  \emph{New in version 4.0.0}}

 \item{...}{Additional arguments passed to \code{\link{lgb.train}}. For example
 \itemize{
diff --git a/R-package/man/predict.lgb.Booster.Rd b/R-package/man/predict.lgb.Booster.Rd
index 84aefb7b555a..f8043767be43 100644
--- a/R-package/man/predict.lgb.Booster.Rd
+++ b/R-package/man/predict.lgb.Booster.Rd
@@ -28,7 +28,9 @@
   If single-row predictions are going to be performed frequently, it is recommended to
   pre-configure the model object for fast single-row sparse predictions through function
-  \link{lgb.configure_fast_predict}.}
+  \link{lgb.configure_fast_predict}.
+
+  \emph{Changed from 'data' in version 4.0.0}}

\item{type}{Type of prediction to output. Allowed types are:\itemize{
\item \code{"response"}: will output the predicted score according to the objective function being
@@ -54,7 +56,9 @@
   If the model was fit through function \link{lightgbm} and it was passed a factor as labels,
   passing the prediction type through \code{params} instead of through this argument might
   result in factor levels for classification objectives not being applied correctly to the
-  resulting output.}
+  resulting output.
+
+  \emph{New in version 4.0.0}}

\item{start_iteration}{int or None, optional (default=None)
Start index of the iteration to predict.
@@ -106,6 +110,8 @@ For prediction types that are meant to always return one output per observation
 }
\description{
Predicted values based on class \code{lgb.Booster}
+
+\emph{New in version 4.0.0}
 }
\details{
If the model object has been configured for fast single-row predictions through
diff --git a/R-package/man/print.lgb.Booster.Rd b/R-package/man/print.lgb.Booster.Rd
index a5057751432c..27a2849556b3 100644
--- a/R-package/man/print.lgb.Booster.Rd
+++ b/R-package/man/print.lgb.Booster.Rd
@@ -16,4 +16,6 @@ The same input \code{x}, returned as invisible.
 }
\description{
Show summary information about a LightGBM model object (same as \code{summary}).
+
+\emph{New in version 4.0.0}
 }
diff --git a/R-package/man/summary.lgb.Booster.Rd b/R-package/man/summary.lgb.Booster.Rd
index 9c2241cb2b23..bd430880186b 100644
--- a/R-package/man/summary.lgb.Booster.Rd
+++ b/R-package/man/summary.lgb.Booster.Rd
@@ -16,4 +16,6 @@ The same input \code{object}, returned as invisible.
 }
\description{
Show summary information about a LightGBM model object (same as \code{print}).
+
+\emph{New in version 4.0.0}
 }
diff --git a/docs/Parallel-Learning-Guide.rst b/docs/Parallel-Learning-Guide.rst
index 247eba6c8193..438fd3f9ee0c 100644
--- a/docs/Parallel-Learning-Guide.rst
+++ b/docs/Parallel-Learning-Guide.rst
@@ -233,6 +233,8 @@ You could edit your firewall rules to allow communication between any of the wor
 Using Custom Objective Functions with Dask
 ******************************************

+.. versionadded:: 4.0.0
+
 It is possible to customize the boosting process by providing a custom objective function written in Python.
 See the Dask API's documentation for details on how to implement such functions.
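A sketch of the feature this new section documents: passing a custom objective callable to a Dask estimator. Plain least squares is used here, whose gradient is ``y_pred - y_true`` with unit Hessian; the local-cluster setup and array sizes are illustrative, and any running Dask cluster would do:

import dask.array as da
import numpy as np
import lightgbm as lgb
from distributed import Client, LocalCluster

def l2_objective(y_true, y_pred):
    # gradient and hessian of 0.5 * (y_pred - y_true)^2
    grad = y_pred - y_true
    hess = np.ones_like(y_true)
    return grad, hess

if __name__ == "__main__":
    client = Client(LocalCluster(n_workers=2))
    X = da.random.random((1000, 10), chunks=(250, 10))
    y = da.random.random((1000,), chunks=(250,))
    model = lgb.DaskLGBMRegressor(objective=l2_objective, n_estimators=10)
    model.fit(X, y)
    preds = model.predict(X).compute()  # raw scores when the objective is a callable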
diff --git a/docs/Parameters.rst b/docs/Parameters.rst
index aee1cc4e7f84..5eecc27889b6 100644
--- a/docs/Parameters.rst
+++ b/docs/Parameters.rst
@@ -145,6 +145,8 @@ Core Parameters

   - ``goss``, Gradient-based One-Side Sampling

+  - *New in version 4.0.0*
+
 - ``data`` :raw-html:`🔗︎`, default = ``""``, type = string, aliases: ``train``, ``train_data``, ``train_data_file``, ``data_filename``

   - path of training data, LightGBM will train from this data
@@ -670,6 +672,8 @@ Learning Control Parameters

   - **Note**: can be used only with ``device_type = cpu``

+  - *New in version 4.0.0*
+
 - ``num_grad_quant_bins`` :raw-html:`🔗︎`, default = ``4``, type = int

   - number of bins to quantization gradients and hessians
@@ -678,6 +682,8 @@ Learning Control Parameters

   - **Note**: can be used only with ``device_type = cpu``

+  - *New in version 4.0.0*
+
 - ``quant_train_renew_leaf`` :raw-html:`🔗︎`, default = ``false``, type = bool

   - whether to renew the leaf values with original gradients when quantized training
@@ -686,10 +692,14 @@ Learning Control Parameters

   - **Note**: can be used only with ``device_type = cpu``

+  - *New in version 4.0.0*
+
 - ``stochastic_rounding`` :raw-html:`🔗︎`, default = ``true``, type = bool

   - whether to use stochastic rounding in gradient quantization

+  - *New in version 4.0.0*
+
 IO Parameters
 -------------
@@ -908,6 +918,8 @@ Dataset Parameters

   - **Note**: ``lightgbm-transform`` is not maintained by LightGBM's maintainers. Bug reports or feature requests should go to `issues page <https://github.com/microsoft/lightgbm-transform/issues>`__

+  - *New in version 4.0.0*
+
 Predict Parameters
 ~~~~~~~~~~~~~~~~~~
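A sketch showing the parameters above together: GOSS selected through the new ``data_sample_strategy`` key rather than ``boosting="goss"``, plus gradient quantization on CPU. Apart from ``data_sample_strategy`` and ``use_quantized_grad``, the values shown are the documented defaults, and the synthetic data is illustrative:

import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 10))
y = rng.standard_normal(1000)

params = {
    "objective": "regression",
    "data_sample_strategy": "goss",   # replaces boosting="goss" as of 4.0.0
    "use_quantized_grad": True,       # CPU-only, per the notes above
    "num_grad_quant_bins": 4,
    "quant_train_renew_leaf": False,
    "stochastic_rounding": True,
    "device_type": "cpu",
    "verbose": -1,
}
booster = lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=20)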
diff --git a/include/LightGBM/config.h b/include/LightGBM/config.h
index 89318a7af246..e01578396259 100644
--- a/include/LightGBM/config.h
+++ b/include/LightGBM/config.h
@@ -166,6 +166,7 @@ struct Config {
   // desc = ``bagging``, Randomly Bagging Sampling
   // descl2 = **Note**: ``bagging`` is only effective when ``bagging_freq > 0`` and ``bagging_fraction < 1.0``
   // desc = ``goss``, Gradient-based One-Side Sampling
+  // desc = *New in version 4.0.0*
   std::string data_sample_strategy = "bagging";

   // alias = train, train_data, train_data_file, data_filename
@@ -598,22 +599,26 @@ struct Config {
   // desc = with quantized training, most arithmetics in the training process will be integer operations
   // desc = gradient quantization can accelerate training, with little accuracy drop in most cases
   // desc = **Note**: can be used only with ``device_type = cpu``
+  // desc = *New in version 4.0.0*
   bool use_quantized_grad = false;

   // [no-save]
   // desc = number of bins to quantization gradients and hessians
   // desc = with more bins, the quantized training will be closer to full precision training
   // desc = **Note**: can be used only with ``device_type = cpu``
+  // desc = *New in version 4.0.0*
   int num_grad_quant_bins = 4;

   // [no-save]
   // desc = whether to renew the leaf values with original gradients when quantized training
   // desc = renewing is very helpful for good quantized training accuracy for ranking objectives
   // desc = **Note**: can be used only with ``device_type = cpu``
+  // desc = *New in version 4.0.0*
   bool quant_train_renew_leaf = false;

   // [no-save]
   // desc = whether to use stochastic rounding in gradient quantization
+  // desc = *New in version 4.0.0*
   bool stochastic_rounding = true;

 #ifndef __NVCC__
@@ -777,6 +782,7 @@ struct Config {
   // desc = path to a ``.json`` file that specifies customized parser initialized configuration
   // desc = see `lightgbm-transform <https://github.com/microsoft/lightgbm-transform>`__ for usage examples
   // desc = **Note**: ``lightgbm-transform`` is not maintained by LightGBM's maintainers. Bug reports or feature requests should go to `issues page <https://github.com/microsoft/lightgbm-transform/issues>`__
+  // desc = *New in version 4.0.0*
   std::string parser_config_file = "";

 #ifndef __NVCC__
diff --git a/python-package/lightgbm/basic.py b/python-package/lightgbm/basic.py
index 0cd69c64b240..2beee2a359c5 100644
--- a/python-package/lightgbm/basic.py
+++ b/python-package/lightgbm/basic.py
@@ -932,6 +932,8 @@ def predict(
             If True, ensure that the features used to predict match the ones used to train.
             Used only if data is pandas DataFrame.

+            .. versionadded:: 4.0.0
+
         Returns
         -------
         result : numpy array, scipy.sparse or list of scipy.sparse
@@ -2841,6 +2843,8 @@ def num_feature(self) -> int:
     def feature_num_bin(self, feature: Union[int, str]) -> int:
         """Get the number of bins for a feature.

+        .. versionadded:: 4.0.0
+
         Parameters
         ----------
         feature : int or str
@@ -4150,19 +4154,34 @@ def refit(
             will use ``leaf_output = decay_rate * old_leaf_output + (1.0 - decay_rate) * new_leaf_output`` to refit trees.
         reference : Dataset or None, optional (default=None)
             Reference for ``data``.

+            .. versionadded:: 4.0.0
+
         weight : list, numpy 1-D array, pandas Series or None, optional (default=None)
             Weight for each ``data`` instance. Weights should be non-negative.

+            .. versionadded:: 4.0.0
+
         group : list, numpy 1-D array, pandas Series or None, optional (default=None)
             Group/query size for ``data``.
             Only used in the learning-to-rank task.
             sum(group) = n_samples.
             For example, if you have a 100-document dataset with ``group = [10, 20, 40, 10, 10, 10]``, that means that you have 6 groups,
             where the first 10 records are in the first group, records 11-30 are in the second group, records 31-70 are in the third group, etc.

+            .. versionadded:: 4.0.0
+
         init_score : list, list of lists (for multi-class task), numpy array, pandas Series, pandas DataFrame (for multi-class task), or None, optional (default=None)
             Init score for ``data``.

+            .. versionadded:: 4.0.0
+
         feature_name : list of str, or 'auto', optional (default="auto")
             Feature names for ``data``.
             If 'auto' and data is pandas DataFrame, data columns names are used.

+            .. versionadded:: 4.0.0
+
         categorical_feature : list of str or int, or 'auto', optional (default="auto")
             Categorical features for ``data``.
             If list of int, interpreted as indices.
@@ -4173,13 +4192,25 @@ def refit(
             All negative values in categorical features will be treated as missing values.
             The output cannot be monotonically constrained with respect to a categorical feature.
             Floating point numbers in categorical features will be rounded towards 0.

+            .. versionadded:: 4.0.0
+
         dataset_params : dict or None, optional (default=None)
             Other parameters for Dataset ``data``.

+            .. versionadded:: 4.0.0
+
         free_raw_data : bool, optional (default=True)
             If True, raw data is freed after constructing inner Dataset for ``data``.

+            .. versionadded:: 4.0.0
+
         validate_features : bool, optional (default=False)
             If True, ensure that the features used to refit the model match the original ones.
             Used only if data is pandas DataFrame.

+            .. versionadded:: 4.0.0
+
         **kwargs
             Other parameters for refit.
             These parameters will be passed to ``predict`` method.
@@ -4271,6 +4302,8 @@ def set_leaf_output(
     ) -> 'Booster':
         """Set the output of a leaf.

+        .. versionadded:: 4.0.0
+
         Parameters
         ----------
         tree_id : int
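A sketch exercising the ``Booster`` additions annotated above: ``validate_features`` on ``predict()``, ``feature_num_bin()``, and a few of the newly documented ``refit()`` arguments. Column names and arrays are illustrative:

import numpy as np
import pandas as pd
import lightgbm as lgb

rng = np.random.default_rng(0)
df = pd.DataFrame({"f0": rng.standard_normal(300), "f1": rng.standard_normal(300)})
y = rng.standard_normal(300)
booster = lgb.train({"objective": "regression", "verbose": -1},
                    lgb.Dataset(df, label=y), num_boost_round=10)

n_bins = booster.feature_num_bin("f0")         # accepts a feature name or index
preds = booster.predict(df, validate_features=True)  # raises if columns differ from training

# refit() keeps the tree structure but re-learns leaf values on new data;
# weight and free_raw_data are among the arguments documented in 4.0.0
df_new = pd.DataFrame({"f0": rng.standard_normal(300), "f1": rng.standard_normal(300)})
refitted = booster.refit(data=df_new, label=rng.standard_normal(300),
                         weight=rng.uniform(0.5, 1.5, size=300),
                         decay_rate=0.9, free_raw_data=True)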
diff --git a/python-package/lightgbm/callback.py b/python-package/lightgbm/callback.py
index 0c5d3e7956fa..77856f5bdab6 100644
--- a/python-package/lightgbm/callback.py
+++ b/python-package/lightgbm/callback.py
@@ -407,6 +407,8 @@ def early_stopping(stopping_rounds: int, first_metric_only: bool = False, verbos
         If float, this single value is used for all metrics.
         If list, its length should match the total number of metrics.

+        .. versionadded:: 4.0.0
+
     Returns
     -------
     callback : _EarlyStoppingCallback
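A sketch of the new ``min_delta`` argument: training stops once 30 consecutive rounds fail to improve the validation ``l2`` metric by at least ``1e-3``. The data and thresholds are illustrative:

import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X, y = rng.standard_normal((1000, 10)), rng.standard_normal(1000)
X_val, y_val = rng.standard_normal((200, 10)), rng.standard_normal(200)

train_set = lgb.Dataset(X, label=y)
valid_set = lgb.Dataset(X_val, label=y_val, reference=train_set)

booster = lgb.train(
    {"objective": "regression", "metric": "l2", "verbose": -1},
    train_set,
    num_boost_round=500,
    valid_sets=[valid_set],
    callbacks=[lgb.early_stopping(stopping_rounds=30, min_delta=1e-3)],
)
print(booster.best_iteration)  # the round at which improvement stalled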
diff --git a/python-package/lightgbm/plotting.py b/python-package/lightgbm/plotting.py
index 9e84da976402..0f9bcd5f8ccb 100644
--- a/python-package/lightgbm/plotting.py
+++ b/python-package/lightgbm/plotting.py
@@ -656,6 +656,9 @@ def create_tree_digraph(
     example_case : numpy 2-D array, pandas DataFrame or None, optional (default=None)
         Single row with the same structure as the training data.
         If not None, the plot will highlight the path that sample takes through the tree.
+
+        .. versionadded:: 4.0.0
+
     max_category_values : int, optional (default=10)
         The maximum number of category values to display in tree nodes, if the number of thresholds is greater than this value,
         thresholds will be collapsed and displayed on the label tooltip instead.
@@ -672,6 +675,8 @@
             graph = lgb.create_tree_digraph(clf, max_category_values=5)
             HTML(graph._repr_image_svg_xml())

+        .. versionadded:: 4.0.0
+
     **kwargs
         Other parameters passed to ``Digraph`` constructor.
         Check https://graphviz.readthedocs.io/en/stable/api.html#digraph for the full list of supported parameters.
@@ -792,6 +797,9 @@ def plot_tree(
     example_case : numpy 2-D array, pandas DataFrame or None, optional (default=None)
         Single row with the same structure as the training data.
         If not None, the plot will highlight the path that sample takes through the tree.
+
+        .. versionadded:: 4.0.0
+
     **kwargs
         Other parameters passed to ``Digraph`` constructor.
         Check https://graphviz.readthedocs.io/en/stable/api.html#digraph for the full list of supported parameters.
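A sketch of ``example_case`` as documented above: the digraph highlights the path a single row takes through the plotted tree. Rendering assumes the ``graphviz`` Python package and binaries are available; the data and file name are illustrative:

import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X, y = rng.standard_normal((500, 4)), rng.standard_normal(500)
booster = lgb.train({"objective": "regression", "verbose": -1},
                    lgb.Dataset(X, label=y), num_boost_round=10)

graph = lgb.create_tree_digraph(
    booster,
    tree_index=0,
    example_case=X[:1],      # one row; its decision path is highlighted
    max_category_values=5,
)
graph.render("tree0", format="png")  # or display inline in a notebook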
diff --git a/python-package/lightgbm/sklearn.py b/python-package/lightgbm/sklearn.py
index f1afdf50724a..7e909342c01f 100644
--- a/python-package/lightgbm/sklearn.py
+++ b/python-package/lightgbm/sklearn.py
@@ -484,6 +484,9 @@ def __init__(
         threads configured for OpenMP in the system.
         A value of ``None`` (the default) corresponds to using the number of physical cores in the system
         (its correct detection requires either the ``joblib`` or the ``psutil`` util libraries to be installed).
+
+        .. versionchanged:: 4.0.0
+
     importance_type : str, optional (default='split')
         The type of feature importance to be filled into ``feature_importances_``.
         If 'split', result contains numbers of times the feature is used in a model.
@@ -968,6 +971,8 @@ def n_estimators_(self) -> int:
         This might be less than parameter ``n_estimators`` if early stopping was enabled or
         if boosting stopped early due to limits on complexity like ``min_gain_to_split``.
+
+        .. versionadded:: 4.0.0
         """
         if not self.__sklearn_is_fitted__():
             raise LGBMNotFittedError('No n_estimators found. Need to call fit beforehand.')
@@ -979,6 +984,8 @@ def n_iter_(self) -> int:
         This might be less than parameter ``n_estimators`` if early stopping was enabled or
         if boosting stopped early due to limits on complexity like ``min_gain_to_split``.
+
+        .. versionadded:: 4.0.0
         """
         if not self.__sklearn_is_fitted__():
             raise LGBMNotFittedError('No n_iter found. Need to call fit beforehand.')
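A sketch of the new fitted attributes: after early stopping, ``n_iter_`` (and its alias ``n_estimators_``) reports the number of boosting rounds actually kept, which can be smaller than the requested ``n_estimators``. The data is illustrative:

import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X, y = rng.standard_normal((1000, 6)), rng.standard_normal(1000)
X_val, y_val = rng.standard_normal((200, 6)), rng.standard_normal(200)

model = lgb.LGBMRegressor(n_estimators=500).fit(
    X, y,
    eval_set=[(X_val, y_val)],
    callbacks=[lgb.early_stopping(stopping_rounds=10, verbose=False)],
)
print(model.n_iter_, model.n_estimators_)  # rounds actually trained, often < 500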