Understanding doepipeline on simple example #33
Hi @bgriffen, thank you for your interest and well-described questions! I will try to answer your questions below.
I hope this answers your questions.
Hi Richard -- thank you for the clarification. My understanding improves each day. Just a few additional comments/questions below if it's OK.
Indeed, I did think I was overstepping the mark with my multivariate normal distribution. Though, with a sufficiently large variance, it should be easier to model with a low-order (linear or quadratic) response surface.
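As a quick sanity check of that intuition (the numbers here are arbitrary, just for illustration):

```python
import numpy as np

# Over a fixed design region, a Gaussian response with a large variance is
# well approximated by a low-order polynomial, since near the peak
# exp(-(x - mu)^2 / (2*sigma^2)) ~= 1 - (x - mu)^2 / (2*sigma^2).
mu, sigma = 60.0, 20.0
x = np.linspace(40.0, 80.0, 9)                      # design region around the optimum
gauss = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))   # "true" response
quad = np.polyval(np.polyfit(x, gauss, 2), x)       # second-order fit
print(np.abs(gauss - quad).max())                   # small residual => easy to model
```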
OK, I'll experiment with the other designs. The trouble I face is choosing a design that can be implemented physically versus what the response surface function may turn out to be (unknown a priori). I need to actually perform these experiments physically, not in silico, so a two-level or three-level full factorial is easier to arrange, but it may not be the best choice given that CCC allows quadratic responses to be well described. On that note, if I have historic data that doesn't fit the current design, e.g. a sparse sampling, can that data still be used to initialize the search? Or do you have to set up the whole pipeline "clean", carry out the experimental designs as generated by `fullfactorial2levels`, and attach their responses once measured? Additionally, once I perform the `fullfactorial2levels` runs, can I then change the design midway through the optimization to get a better functional fit/prediction of the optima?
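For reference, this is the kind of trade-off I'm weighing up, written with pyDOE2 directly (which I believe doepipeline builds on; the design names and options below are pyDOE2's, not doepipeline's):

```python
import pyDOE2

# Two-level full factorial in 2 factors: 4 corner runs, coded -1/+1.
# Cheap to run physically, but only linear + interaction effects are estimable.
ff2 = pyDOE2.ff2n(2)

# Three-level full factorial: 9 runs, can detect curvature.
ff3 = pyDOE2.fullfact([3, 3])

# Circumscribed central composite (CCC): corners + axial "star" points + centre
# replicates, enough runs to fit a full quadratic response surface.
ccc = pyDOE2.ccdesign(2, center=(1, 1), face='circumscribed')

for name, design in [('2-level FF', ff2), ('3-level FF', ff3), ('CCC', ccc)]:
    print(f'{name}: {design.shape[0]} runs')
```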
I guess what I asked poorly was whether I need to run a `predict_optimum` step first in order for the OLS model to be used to generate new experiments. Just to double check: is my order of doing things correct? I added a comment to the lines in question below.
I'm just struggling to get the order of operations right. Lastly, if a factor's weight is ~0 in the prediction, is there a way to remove it when generating subsequent designs? Thank you for your help. Much appreciated.
Great package. I've just begun tinkering at a very low level to make sure my intuition reflects what the code is doing. I've set up a two-factor experiment with a normal distribution as the response. For example:
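Something along these lines, where only `mu_x` and `mu_y` are deliberate choices and the spreads are arbitrary:

```python
import numpy as np

# The "true" optimum the optimization should converge towards.
mu_x, mu_y = 60.0, 75.0
sigma_x, sigma_y = 15.0, 15.0   # arbitrary spreads, wide enough for a smooth surface

def response(x, y):
    """Simulated response: an unnormalised bivariate Gaussian peaked at (mu_x, mu_y)."""
    return np.exp(-((x - mu_x) ** 2 / (2 * sigma_x ** 2)
                    + (y - mu_y) ** 2 / (2 * sigma_y ** 2)))
```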
I now want to iteratively optimize, as per the doepipeline approach, hopefully converging on the optimum set by `mu_x` and `mu_y`.
Now I create a simple function to loop through each experimental design at each step. I made one or two modifications to return the model and optima so I can inspect them interactively in my notebook.
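Roughly along these lines. This is a self-contained stand-in that sidesteps doepipeline's designer entirely, and every helper name in it is made up; it is only meant to show the shape of the loop:

```python
import itertools
import numpy as np

def run_iterations(center, span, n_iter=6):
    """Toy stand-in for the loop: evaluate a two-level factorial (plus centre point)
    around the current best guess, move the centre to the best-scoring point,
    and shrink the design region each iteration."""
    history = [tuple(center)]
    for _ in range(n_iter):
        # corners of a two-level full factorial plus the centre point itself
        offsets = list(itertools.product((-1.0, 1.0), repeat=2)) + [(0.0, 0.0)]
        candidates = [(center[0] + dx * span, center[1] + dy * span)
                      for dx, dy in offsets]
        # "measure" the response at each candidate setting
        scores = [response(x, y) for x, y in candidates]   # response() from the snippet above
        # move to the best setting and tighten the design region
        center = list(candidates[int(np.argmax(scores))])
        span *= 0.5
        history.append(tuple(center))
    return center, history

best, history = run_iterations(center=[40.0, 40.0], span=20.0)
print(best)   # should end up close to (mu_x, mu_y) = (60, 75)
```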
The dashed line marks the initial guess, and each of the numbers 1 through 6 marks an iteration. The yellow dot is the hard-set optimum to be found.
A few questions:
1. It seems `exp.get_best_experiment` and `get_optimal_settings` can only provide experimental conditions that have already been tested, not actually interpolated optimal responses? I would expect that after maybe three or four iterations the system would predict an optimum closer to the true optimum, no? I guess I'm a bit confused over the nomenclature.
2. Despite the `model.summary()` output looking quite desirable, it still didn't converge (see below).
3. In `designer.py`, at `_new_optimization_design(self)`, I can't seem to see how the OLS model is used to generate a new set of experimental settings, at least in my intuitive arrangement above (see the sketch at the end of this post for what I had pictured).

Lastly, I get the following:
Given `mu_x = 60` and `mu_y = 75`, I'm just wondering if I've done something wrong, set it up incorrectly, or taken the pipeline into an area it isn't best suited to. Apologies for any silly errors; I'm just trying to understand what's going on, as the examples provided have a certain overhead to getting started. A very simple, purely Pythonic example would be greatly appreciated. Thanks for any help you might provide.
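For completeness, this is the kind of "interpolated optimum" step I had pictured for question 3: fit a quadratic response surface to the measured runs by ordinary least squares and solve for its stationary point. This is purely my own sketch (plain numpy least squares rather than statsmodels, to keep it short), not how doepipeline's `_new_optimization_design` actually works:

```python
import numpy as np

# Measured runs from one design iteration (a 3x3 grid; the settings are illustrative).
xs = np.array([(x, y) for x in (40.0, 60.0, 80.0) for y in (55.0, 75.0, 95.0)])
z = np.array([response(x, y) for x, y in xs])       # response() from the first snippet

x, y = xs[:, 0], xs[:, 1]
# Design matrix for a full quadratic surface: 1, x, y, x^2, y^2, x*y
X = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
b0, bx, by, bxx, byy, bxy = np.linalg.lstsq(X, z, rcond=None)[0]

# Stationary point of the fitted surface: solve grad = 0
#   d/dx: bx + 2*bxx*x + bxy*y = 0
#   d/dy: by + bxy*x + 2*byy*y = 0
A = np.array([[2 * bxx, bxy],
              [bxy, 2 * byy]])
opt = np.linalg.solve(A, -np.array([bx, by]))
print(opt)   # an interpolated optimum, not restricted to already-tested settings
```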