Parameter_Selection_Notes.txt

CENTRAL IDEA: Neural networks help find initial parameter estimates, making parameter estimation
for complex equations more efficient
-Usually much faster than traditional parameter estimation
Parameter Estimation for Process Control (1991): A+
-Many important parameters affect output of Process Control Algorithms
-Training examples
-LOOK INTO: algorithm models used in training??
-Input: sampled process input and output vectors
-Output: estimated associated process parameters
-Network implements function for estimating parameters from input/output
-IMPORTANT: paper is from 1991; several of its techniques are now outdated
-LOOK INTO: dynamic generation of training examples (during training?)
-Trained on 6 million examples
-When trained with 50% added noise, test results were significantly better (noise acts as regularization)
-Trained for one particular parameter at a time
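
-Minimal sketch of this setup (my illustration, not the paper's 1991 method; the toy process,
sizes, and scikit-learn's MLPRegressor are all assumptions):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def simulate_process(theta, n_samples=20):
        # Toy first-order process y[k] = theta*y[k-1] + u[k]; stands in for
        # whatever process model generates the training examples
        u = rng.normal(size=n_samples)
        y = np.zeros(n_samples)
        for k in range(1, n_samples):
            y[k] = theta * y[k - 1] + u[k]
        return np.concatenate([u, y])   # sampled process input and output vectors

    thetas = rng.uniform(0.1, 0.9, size=5000)               # one parameter per network
    X = np.stack([simulate_process(t) for t in thetas])
    X_noisy = X + 0.5 * X.std() * rng.normal(size=X.shape)  # heavy input noise as regularization

    net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500)
    net.fit(X_noisy, thetas)   # network implements function: input/output -> parameter estimate
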
Fast cosmological parameter estimation (2008): B-
-Power spectra are computationally difficult to calculate from the cosmological parameters
-Around 10000 data points used to train
-PICO machine learning technique (outdated)
-Calculate required spectra and corresponding likelihoods for experiments of interest
-Points chosen within specified parameter space
-Training set used to divide parameter space into ~100 smaller regions using k-means
-One network used for each power spectrum (4 total)
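
-Sketch of the region-splitting idea: k-means over sampled parameter points, one regressor per
region (random filler data; PICO itself fits polynomials per region, MLPs here are my stand-in):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPRegressor

    params = np.random.rand(10000, 6)     # sampled points in the specified parameter space
    spectra = np.random.rand(10000, 50)   # precomputed power spectra (filler values)

    km = KMeans(n_clusters=100, n_init=10).fit(params)   # ~100 smaller regions
    models = {r: MLPRegressor(hidden_layer_sizes=(64,), max_iter=300)
                 .fit(params[km.labels_ == r], spectra[km.labels_ == r])
              for r in range(km.n_clusters)}

    # New point: route to its region's model instead of running the full calculation
    p_new = np.random.rand(1, 6)
    spectrum = models[km.predict(p_new)[0]].predict(p_new)
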
Artificial Neural Network Approx... Parameter Estimation: A-
-Parameter estimation is a key step in development of high-fidelity models
-Parameter estimation decomposed into two subproblems
1. Approximate NN model from given data
2. Use derived NN model to solve simplified optimization problem to obtain parameter estimates
-GRAPHS
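
-Minimal sketch of the two-step decomposition (the expensive_model stand-in, sizes, and choice
of optimizer are my assumptions, not the paper's):

    import numpy as np
    from scipy.optimize import minimize
    from sklearn.neural_network import MLPRegressor

    def expensive_model(theta):   # stand-in for the real high-fidelity model
        return np.array([np.sin(theta[0]) + theta[1], theta[0] * theta[1]])

    # Subproblem 1: approximate NN model from sampled (parameter, output) data
    thetas = np.random.uniform(0, 2, size=(2000, 2))
    outputs = np.array([expensive_model(t) for t in thetas])
    surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000).fit(thetas, outputs)

    # Subproblem 2: simplified optimization problem over the cheap NN model
    y_obs = expensive_model(np.array([1.0, 0.5]))    # pretend measurement
    misfit = lambda t: np.sum((surrogate.predict(t.reshape(1, -1))[0] - y_obs) ** 2)
    theta_hat = minimize(misfit, x0=np.array([1.0, 1.0]), method="Nelder-Mead").x
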
Hurst Parameter Estimation: B
-Neural Network estimation method at least ten times faster than traditional methods
-Powers of each series are inputs to NN, Hurst parameter estimate is NN output
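
-Sketch of that input/output layout (the fGn generator and block-variance features are my
assumptions; the paper's exact "powers" may differ):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)

    def fgn(H, n=128):
        # Exact fractional Gaussian noise via Cholesky of its autocovariance
        k = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]).astype(float)
        gamma = 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))
        return np.linalg.cholesky(gamma) @ rng.standard_normal(n)

    def features(series):
        # log-variance of the block-aggregated series at several scales
        return np.array([np.log(series.reshape(-1, m).mean(axis=1).var())
                         for m in (1, 2, 4, 8, 16)])

    Hs = rng.uniform(0.2, 0.8, size=500)
    X = np.stack([features(fgn(H)) for H in Hs])
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, Hs)  # Hurst estimate out
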
Estimating Reaction Rate Constants With Neural Networks (2007): A
-Reaction rate coefficient vital aspect of chemical kinetics
-Estimating rate constants even today is somewhat of an art
-Training data:
-Input data: set of concentrations calculated under different parameter settings (not the params to be estimated)
-Output: some set of parameters to be estimated
-Important to simplify equations for training if possible
-Recommends limited number of parameters estimated per network
-Several methods to select parameter vectors for input:
1. Fix set of parameters that are not used in algorithm (network succeeds, accuracy is good)
2. Select set of numbers to use in input
3. Latin Hypercube for parameter selection (research; see sketch below)
-GREAT EXAMPLES
-Much work is needed to train networks, but once trained, estimation is easy and "can be applied to same
reaction under different circumstances"
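
-Sketch of option 3 from the list above, using scipy's qmc module (the number of rate constants
and their bounds are made up):

    import numpy as np
    from scipy.stats import qmc

    sampler = qmc.LatinHypercube(d=3, seed=0)       # 3 rate constants to vary
    unit = sampler.random(n=1000)                   # stratified points in [0, 1)^3
    lower, upper = [0.1, 0.01, 1.0], [10.0, 1.0, 100.0]
    rate_constants = qmc.scale(unit, lower, upper)  # parameter vectors spread over the box
    # Each row would then be fed to the kinetics model to generate input concentrations
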
Aeroelastic Aircraft Parameter Estimation (2000): A-
-Recently introduced aircraft with a high degree of flexibility pose a challenge for parameter estimation
-Trial and error to determine architecture of hidden layers
-Training data:
-IMPORTANT: Three network inputs are used, but only one is slightly changed from sample to sample
-Motion and control variables used as network inputs
-Force and moment coefficients used as output variables (use different networks for each?)
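
-Sketch of that layout: one network per coefficient, motion and control variables as shared
inputs (random filler data; variable names are illustrative, not the paper's flight-test data):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    n = 2000
    motion = np.random.randn(n, 4)     # e.g. angle of attack, pitch rate, ...
    control = np.random.randn(n, 2)    # e.g. elevator, flap deflections
    X = np.hstack([motion, control])
    coeffs = {"CL": np.random.randn(n), "CD": np.random.randn(n),
              "Cm": np.random.randn(n)}   # force/moment coefficients (filler)

    nets = {name: MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=300).fit(X, y)
            for name, y in coeffs.items()}
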
Neural Network Optimization of Dry Turning Parameters (2017): A-
-NN determined to be successful at estimating dry turning parameters
-LOOK INTO: Model simulation
Quality Parameters for Surfaces Processed by Superfinishing (2017): A+++
-32 experiments chosen to use in NN study
-Reasoning for using NN: dependence between inputs and outputs is nonlinear
-Great general explanation of neural networks
-LOOK INTO: VLSI neural chips
-Interesting observation: NN used for pattern recognition uses one hidden layer;
NN used for nonlinear function approx. uses two (as is the case here)
-I've figured this out by experimentation
-Architecture: two hidden layers, 5 neurons each
-Paper never explicitly writes out algorithms
-IMPORTANT: better to train for one parameter at a time (learns patterns faster this way)
-Input and output scaled during training (see the SCALE and RE-SCALE equations)
-60,000 epochs to converge
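
-Sketch of the scaling step plus the stated architecture (min-max scaling is my guess at the
paper's unspecified SCALE/RE-SCALE equations; the data is random filler):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    X_raw = np.random.rand(32, 4)     # 32 experiments; input count is illustrative
    y_raw = np.random.rand(32)        # one quality parameter per network

    # SCALE: map inputs and output into [0, 1] before training
    x_min, x_max = X_raw.min(axis=0), X_raw.max(axis=0)
    y_min, y_max = y_raw.min(), y_raw.max()
    X = (X_raw - x_min) / (x_max - x_min)
    y = (y_raw - y_min) / (y_max - y_min)

    # Two hidden layers, 5 neurons each; max_iter plays the role of epochs here
    net = MLPRegressor(hidden_layer_sizes=(5, 5), max_iter=60000).fit(X, y)

    # RE-SCALE: map a prediction back to physical units
    y_pred = net.predict(X[:1]) * (y_max - y_min) + y_min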