Replies: 1 comment
On Wed, Aug 31, 2022 at 9:36 AM Dean Shabi ***@***.***> wrote:
Hello!
We are facing an issue trying to use lmfit for inference.
Our training pipeline uses ExpressionModel: we fit it, then save the trained
parameters as well as the confidence intervals in a database.
In our inference pipeline, we would like to initialize the same lmfit
model with the parameters found during training, similar to loading a
dumped model.
Something of this sort:
train_params = load_from_db()
model = ExpressionModel(...)
model.params = train_params
model.eval(x = ...)
However, we cannot run eval without running fit first, or without dumping
the trained model with dill.
This sounds to us like a common use case; is there a way to do it?
Pass your `params` to `Model.eval()`:
import numpy as np
from lmfit import Parameters
from lmfit.models import ExpressionModel
params = Parameters()
params.add_many(('off', 1.0), ('slope', 0.220))
model = ExpressionModel('off + slope*x')
x = np.linspace(0, 10, 21)
y = model.eval(params, x=x)
That is, a Model doesn't "have" Parameters, a Model represents a functional
form. It can be evaluated with Parameters and the values for the expected
independent variables. A ModelResult does have Parameters.
--Matt