Releases: tidymodels/tune
tune 1.2.1
- Addressed issue in `int_pctl()` where the function would error when parallelized using `makePSOCKcluster()` (#885).
- Addressed issue where tuning functions would raise the error `object 'iteration' not found` with `plan(multisession)` and the control option `parallel_over = "everything"` (#888). A sketch of this configuration follows this list.
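For reference, a minimal sketch of the future-based setup referenced in the second fix; `wflow` and `folds` are hypothetical placeholders for a workflow and a set of resamples:

```r
library(tune)
library(future)

# future-based parallelism combined with the control option that
# previously triggered "object 'iteration' not found" (#888):
plan(multisession, workers = 2)

res <- tune_grid(
  wflow,
  resamples = folds,
  grid = 10,
  control = control_grid(parallel_over = "everything")
)
```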
Note that this GitHub release is a re-release with the tag v1.2.1, corresponding to the package version that arrived on CRAN on 2024-04-18. The previous GitHub release with this tag did not match the version released to CRAN. Closes #897.
tune 1.2.0
New Features
- tune now fully supports models in the "censored regression" mode. These models can be fit, tuned, and evaluated like the regression and classification modes. tidymodels.org has more information and tutorials on how to work with survival analysis models.
- Introduced support for parallel processing using the future framework. The tune package previously supported parallelism with foreach, and users can use either framework for now. In a future release, tune will begin the deprecation cycle for parallelism with foreach, so we encourage users to begin migrating their code now. See the Parallel Processing section in the "Optimizations" article to learn more, and the migration sketch after this list (#866).
- Added a `type` argument to `collect_metrics()` to indicate the desired output format. The default, `type = "long"`, returns output as before, while `type = "wide"` pivots the output such that each metric has its own column (#839). See the combined sketch after this list, which also covers `compute_metrics()` and the `int_pctl()` method.
- Added a new function, `compute_metrics()`, that allows for computing new metrics after evaluating against resamples. The arguments and output formats are closely related to those from `collect_metrics()`, but this function requires that the input be generated with the control option `save_pred = TRUE` and additionally takes a `metrics` argument with a metric set for new metrics to compute. This allows for computing new performance metrics without requiring users to re-fit and re-predict from each model (#663).
- A method for rsample's `int_pctl()` function that will compute percentile confidence intervals on performance metrics for objects produced by `fit_resamples()`, `tune_*()`, and `last_fit()`.
- The Brier score is now part of the default metric set for classification models.
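As promised above, a minimal sketch of migrating from foreach- to future-based parallelism; `wflow` and `folds` are hypothetical placeholders:

```r
# old approach (foreach; slated for deprecation):
# doParallel::registerDoParallel(parallel::makePSOCKcluster(2))

# new approach (future):
library(future)
plan(multisession, workers = 2)

res <- tune::tune_grid(wflow, resamples = folds, grid = 10)
```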
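And a minimal sketch of the new post-tuning helpers, assuming `res` is a classification tuning result created with the control option `save_pred = TRUE` (required by both `compute_metrics()` and the `int_pctl()` method):

```r
library(tune)

collect_metrics(res)                 # long format, as before
collect_metrics(res, type = "wide")  # one column per metric

# compute additional metrics from the saved predictions,
# without re-fitting or re-predicting:
compute_metrics(res, metrics = yardstick::metric_set(yardstick::f_meas, yardstick::kap))

# percentile confidence intervals on performance metrics
# (bootstrap-based, so this may take a moment):
rsample::int_pctl(res)
```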
Bug Fixes
- `last_fit()` will now error when supplied a fitted workflow (#678).
- Fixes bug where `.notes` entries were sorted in the wrong order in tuning results for resampling schemes with IDs that aren't already in alphabetical order (#728).
- Fixes bug where `.config` entries in the `.extracts` column in `tune_bayes()` output didn't align with the entries they ought to in the `.metrics` and `.predictions` columns (#715).
- Metrics from apparent resamples are no longer included when estimating performance with `estimate_tune_results()` (and thus with `collect_metrics(..., summarize = TRUE)` and `compute_metrics(..., summarize = TRUE)`, #714).
- Handles edge cases for `tune_bayes()`' `iter` argument more soundly. For `iter = 0`, the output of `tune_bayes()` should match `tune_grid()`, and `tune_bayes()` will now error when `iter < 0`. `tune_bayes()` will now alter the state of RNG slightly differently, resulting in changed Bayesian optimization search output (#720).
- `augment()` methods for `tune_results`, `resample_results`, and `last_fit` objects now always return tibbles (#759).
Other Changes
- Improved error message when needed packages aren't installed (#727).
- Improves documentation related to the hyperparameters associated with extracted objects that are generated from submodels. See the "Extracting with submodels" section of `?collect_extracts` to learn more.
- `eval_time` and `eval_time_target` attributes were added to tune objects. There are also `.get_tune_eval_times()` and `.get_tune_eval_time_target()` functions.
- `collect_predictions()` now reorders the columns so that all prediction columns come first (#798).
- `augment()` methods for `tune_results`, `resample_results`, and `last_fit` objects now return prediction results in the first columns (#761).
- `autoplot()` will now meaningfully error if only 1 grid point is present, rather than producing a plot (#775).
- Added notes on case weight usage to several functions (#805).
- For iterative optimization routines, `autoplot()` will use integer breaks when `type = "performance"` or `type = "parameters"`.
Breaking Changes
- Several functions gained an `eval_time` argument for the evaluation time of dynamic metrics for censored regression. The placement of the argument breaks passing-by-position for one or more other arguments to `autoplot.tune_results()` and the developer-focused `check_initial()` (#857).
- Ellipses (`...`) are now used consistently in the package to require optional arguments to be named. For functions that previously had ellipses at the end of the function signature, they have been moved to follow the last argument without a default value: this applies to `augment.tune_results()`, `collect_predictions.tune_results()`, `collect_metrics.tune_results()`, `select_best.tune_results()`, `show_best.tune_results()`, and the developer-focused `estimate_tune_results()`, `load_pkgs()`, and `encode_set()`. Several other functions that previously did not have ellipses in their signatures gained them: this applies to `conf_mat_resampled()` and the developer-focused `check_workflow()`. Optional arguments previously passed by position will now error informatively, prompting them to be named. These changes don't apply in cases when the ellipses are currently in use to forward arguments to other functions (#863). See the sketch after this list.
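A minimal sketch of the naming requirement, assuming a tuning result `res`; previously these optional arguments could be passed by position:

```r
# now errors informatively: show_best(res, "rmse", 5)
show_best(res, metric = "rmse", n = 5)
select_best(res, metric = "rmse")
```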
tune 1.1.2
- `last_fit()` now works with the 3-way validation split objects from `rsample::initial_validation_split()`. `last_fit()` and `fit_best()` now have a new argument `add_validation_set` to include or exclude the validation set in the dataset used to fit the model (#701). A sketch follows this list.
- Disambiguates the `verbose` and `verbose_iter` control options to better align with documented functionality. The former controls logging for general progress updates, while the latter only does so for the Bayesian search process. (#682)
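A minimal sketch of `last_fit()` with a 3-way split; `wflow` is a hypothetical placeholder for a workflow:

```r
library(rsample)

# 60% training, 20% validation, 20% testing:
split <- initial_validation_split(mtcars, prop = c(0.6, 0.2))

# include the validation set in the data used for the final fit:
final <- tune::last_fit(wflow, split, add_validation_set = TRUE)
```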
tune 1.1.1
- Fixed a bug introduced in tune 1.1.0 in `collect_*()` functions where the `.iter` column was dropped.
tune 1.1.0
tune 1.1.0 introduces a number of new features and bug fixes, accompanied by various optimizations that substantially decrease the total evaluation time to tune hyperparameters in the tidymodels.
New features
- Introduced a new function `fit_best()` that provides a shorthand interface to fit a final model after parameter tuning. (#586) See the sketch after this list.
- Refined machinery for logging issues during tuning. Rather than printing out warnings and errors as they appear, the package will now only print unique tuning issues, updating a dynamic summary message that maintains counts of each unique issue. This feature is only enabled for tuning sequentially and can be manually toggled with the `verbose` option. (#588)
- Introduced `collect_extracts()`, a function for collecting extracted objects from tuning results. The format of results closely mirrors `collect_notes()`, where the extracted objects are contained in a list-column alongside the resample ID and workflow `.config`. (#579)
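A minimal sketch of the two new functions, assuming `res` is a tuning result created with, say, `control_grid(save_workflow = TRUE, extract = identity)`: `fit_best()` needs the saved workflow, and `collect_extracts()` needs an `extract` function to have produced something to collect:

```r
# fit a final model on the training set using the numerically best parameters:
best_fit <- tune::fit_best(res)

# gather the extracted objects into a tibble with a list-column:
extracts <- tune::collect_extracts(res)
```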
Bug fixes
- Fixed bug in `select_by_pct_loss()` where the model with the greatest loss within the limit was returned rather than the simplest model whose loss was within the limit. (#543) See the sketch after this list.
- Fixed bug in `tune_bayes()` where `.Last.tune.result` would return intermediate tuning results. (#613)
- Extended `show_best()`, `select_best()`, `select_by_one_std_error()`, and `select_by_pct_loss()` to accommodate metrics with a target value of zero (notably, `yardstick::mpe()` and `yardstick::msd()`). (#243)
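A minimal sketch of the `select_by_pct_loss()` behavior after the fix, assuming `res` was tuned over `penalty` (where larger values give simpler models):

```r
# sort so the simplest candidates come first, then pick the simplest model
# whose loss is within 2% of the numerically best result:
tune::select_by_pct_loss(res, dplyr::desc(penalty), metric = "rmse", limit = 2)
```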
Other changes
- Implemented various optimizations in tune's backend that substantially decrease the total evaluation time to tune hyperparameters with the tidymodels. (#634, #635, #636, #637, #640, #641, #642, #648, #649, #653, #656, #657)
- Allowed users to supply list-columns in `grid` arguments. This change allows for manually specifying grid values that must be contained in list-columns, like functions or lists. (#625) See the sketch after this list.
- Clarified error messages in `select_by_*` functions. Error messages now only note entries in `...` that are likely candidates for failure to `arrange()`, and those error messages are no longer duplicated for each entry in `...`.
- Improved condition handling for errors that occur during extraction from workflows. While messages and warnings were appropriately handled, errors occurring due to misspecified `extract()` functions being supplied to `control_*()` functions were silently caught. As with warnings, errors are now surfaced both during execution and at `print()` (#575).
- Moved forward with the deprecation of `parameters()` methods for `workflow`, `model_spec`, and `recipe` objects. Each of these methods will now warn on every usage and will be defunct in a later release of the package. (#650)
- Various bug fixes and improvements to documentation.
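A minimal sketch of a grid with a list-column; `wflow`, `folds`, and the list-valued parameter here are hypothetical placeholders:

```r
grid <- tibble::tibble(
  penalty = c(0.01, 0.1),
  # a hypothetical tuning parameter whose values must live in a list-column:
  weights = list(c(1, 1), c(1, 2))
)

res <- tune::tune_grid(wflow, resamples = folds, grid = grid)
```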
tune 1.0.1
- `last_fit()`, `fit_resamples()`, `tune_grid()`, and `tune_bayes()` do not automatically error if the wrong type of `control` object is passed. If the passed control object is not a superset of the one that is needed, the function will still error. As an example, passing `control_grid()` to `tune_bayes()` will fail but passing `control_bayes()` to `tune_grid()` will not. (#449)
- The `collect_metrics()` method for racing objects was removed (and is now in the finetune package).
- Improved prompts related to parameter tuning. When tuning parameters are supplied that are not compatible with the given engine, `tune_*()` functions will now error. (#549)
- `control_bayes()` got a new argument `verbose_iter` that is used to control the verbosity of the Bayesian calculations. This change means that the `verbose` argument is being passed to `tune_grid()` to control its verbosity.
- The `control_last_fit()` function gained an argument `allow_par` that defaults to `FALSE`. This change addresses failures after `last_fit()` using modeling engines that require native serialization, and we anticipate little to no increase in time-to-fit resulting from this change. (#539, tidymodels/bonsai#52)
- `show_notes()` does a better job of... showing notes. (#558)
tune 1.0.0
- `show_notes()` is a new function that can better help understand warnings and errors.
- Logging that occurs in the tuning and resampling functions now shows multi-line error messages and warnings on multiple lines.
- When `fit_resamples()`, `last_fit()`, `tune_grid()`, or `tune_bayes()` complete without error (even if models fail), the results are also available via `.Last.tune.result`. See the sketch after this list.
- `last_fit()` now accepts a `control` argument to allow users to control aspects of the last fitting process via `control_last_fit()` (#399).
- Case weights are enabled for models that can use them.
- Some internal functions were exported for use by other packages.
- A check was added to `fit_resamples()` and `last_fit()` to give a more informative error message when a preprocessor or model has parameters marked for tuning.
- `outcome_names()` works correctly when a recipe has NA roles. (#518)
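A minimal sketch of retrieving the stashed result; `wflow` and `folds` are hypothetical placeholders:

```r
res <- tune::fit_resamples(wflow, folds)

# the same result is also stashed in .Last.tune.result, and
# show_notes() surfaces any captured warnings and errors:
tune::show_notes(.Last.tune.result)
```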
tune 0.2.0
- The `.notes` column now contains information on the type of note (error or warning), the location where it occurred, and the note. Printing a tune result has different output describing the notes.
- `collect_notes()` can be used to gather any notes to a tibble. (#363) See the sketch after this list.
- Parallel processing with PSOCK clusters is now more efficient, due to carefully avoiding sending extraneous information to each worker (#384, #396).
- The engine arguments for xgboost `alpha`, `lambda`, and `scale_pos_weight` are now tunable.
- When the Bayesian optimization data contain missing values, these are removed before fitting the GP model. If all metrics are missing, no GP is fit and the current results are returned. (#432)
- Moved `tune()` from tune to hardhat (#442).
- The `parameters()` methods for `recipe`, `model_spec`, and `workflow` objects have been soft-deprecated in favor of `extract_parameter_set_dials()` methods (#428).
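A minimal sketch of the two helpers above; `res` and `wflow` are hypothetical placeholders for a tuning result and a workflow:

```r
# gather notes (warnings and errors) into a tibble:
notes <- tune::collect_notes(res)

# preferred replacement for the soft-deprecated parameters() methods
# (the method for workflows is registered by the workflows package):
params <- hardhat::extract_parameter_set_dials(wflow)
```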
tune 0.1.6
- When using `load_pkgs()`, packages that use random numbers on start-up do not affect the state of the RNG. We also added more control of the RNGkind to make it consistent with the user's previous value (#389).
- New `extract_*()` functions have been added that supersede many of the existing `pull_*()` functions. This is part of a larger move across the tidymodels packages towards a family of generic `extract_*()` functions. Many `pull_*()` functions have been soft-deprecated, and will eventually be removed. (#378)
tune 0.1.5
- Fixed a bug where the resampled confusion matrix was transposed when `conf_mat_resampled(tidy = FALSE)` was used (#372)