Versions are tagged in git under v0.x
- be able to compare not only with one specified training but also with others -> this should probably be a command line tool (run after training)
- plot distributions into the same figure to make them more comparable -> histograms: old vs. new (see the histogram sketch below)
- add gradient clipping (see the gradient-clipping sketch below)
- add categories in config
- add linting
- add Bitbucket pipeline to run tests automatically
- plot creation for already trained weights -> write a CLI that loads the weights
- only train on the first reco track of a track
- make an average loss graph from all experts -> how do the experts communicate? -> easy because we use threads -> use a class that all experts log to; once all have logged for an epoch, we log to TensorBoard and create visualizations -> problem: they run in somewhat separate processes and therefore cannot really communicate, maybe solve with shared memory (see the aggregator sketch below)
- baseline model v2 with BN and ReLU
- reweighting of training samples, by duplicating samples per bin or by reweighting them per bin -> or random sampling with the same probability per bin; idea: turn it into a classification problem (see the sampler sketch below)
- train with different batch sizes and learning rates per expert
- write readme page
- add unit tests
- create a pickle file with z, theta predictions after training for future comparison
- don't use dataset predictions but optionally the ones from older trainings
- file with a single summary number -> over all experts and per expert, and maybe compare to the previous training
- distribution sampling in the config: the distribution should be configurable
- organize main better and support command line args
- global experiment log
- filter functions
- native filter datasets
- add presentation and finish readme
- fix inheritance bug in config
- add CLI parameters that can override config values (see the config-override sketch below)
- add statistical values such as mean and std to the plots (as legend entries)
- implement Rprop, generalize optimizers and put them into the config (see the optimizer-factory sketch below)
- fix weight init
- activation function into the config
- fix x-axis of the hist plot for ground-truth data
- add diff hist plot -> z(Reco - Neuro)
- add std(z(Reco - Neuro)) to TensorBoard plot metrics, and the relative change old vs. new
- add std bins plot
- pin PyTorch Lightning version
- rescale z/theta outputs to represent real physical values
- reimplement dataset caching
- at the end of the training create weights, a prediction dataset, plots as PNGs, and maybe evaluate on the test set?
- validate with the best trained weights
- export (the best) weights at the end of the training
- description in experiment log
- add support for extending another config
- add training for only z component
- add EasyDict
- save git diff and commit id into the log (see the git-state sketch below)
- add change log
- per-expert hparams -> which override the default values, e.g. different batch sizes (see the per-expert hparams sketch below)
- issue with signal shutdown in threads
- models and critic function also in the configuration using dicts
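
Histogram sketch: a minimal example for the old-vs-new item, assuming the predictions are already available as NumPy arrays; the names `z_old`, `z_new`, the bin count, and the output path are placeholders.

```python
import matplotlib.pyplot as plt
import numpy as np

def overlay_hist(z_old, z_new, bins=100, path="z_hist_old_vs_new.png"):
    """Plot both distributions into one figure so old and new are directly comparable."""
    # Shared bin edges so both histograms are binned identically.
    edges = np.histogram_bin_edges(np.concatenate([z_old, z_new]), bins=bins)
    plt.hist(z_old, bins=edges, alpha=0.5,
             label=f"old (mean={z_old.mean():.3f}, std={z_old.std():.3f})")
    plt.hist(z_new, bins=edges, alpha=0.5,
             label=f"new (mean={z_new.mean():.3f}, std={z_new.std():.3f})")
    plt.xlabel("z")
    plt.legend()
    plt.savefig(path)
    plt.close()
```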
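
Gradient-clipping sketch: clipping is supported directly by the Lightning Trainer; the clip value below is only a placeholder to be tuned or moved into the config.

```python
import pytorch_lightning as pl

# gradient_clip_val enables gradient clipping for all experts trained with this Trainer;
# 0.5 is a placeholder value, not a tuned setting.
trainer = pl.Trainer(max_epochs=100, gradient_clip_val=0.5)
```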
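
Aggregator sketch: a possible shape for the shared logging class from the average-loss item, assuming the experts run as threads in one process and report one loss value per epoch; class name and TensorBoard tag are made up for illustration.

```python
import threading
from collections import defaultdict
from torch.utils.tensorboard import SummaryWriter

class ExpertLossAggregator:
    """All expert threads log their epoch loss here; once every expert has
    reported for an epoch, the average is written to TensorBoard."""

    def __init__(self, num_experts, log_dir="logs/combined"):
        self.num_experts = num_experts
        self.writer = SummaryWriter(log_dir)
        self.losses = defaultdict(dict)  # epoch -> {expert_idx: loss}
        self.lock = threading.Lock()     # experts are threads, so a lock is enough

    def log(self, expert_idx, epoch, loss):
        with self.lock:
            self.losses[epoch][expert_idx] = loss
            if len(self.losses[epoch]) == self.num_experts:
                avg = sum(self.losses[epoch].values()) / self.num_experts
                self.writer.add_scalar("avg_loss/all_experts", avg, epoch)
                del self.losses[epoch]
```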
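
Sampler sketch: random sampling with the same probability per bin via `WeightedRandomSampler`, assuming the target values (here `z`) are known for the whole training set and the dataset yields samples in the same order; the helper name and bin count are placeholders.

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def make_uniform_bin_sampler(z, n_bins=10):
    """Give every z bin the same total sampling probability by weighting each
    sample with the inverse of its bin population."""
    edges = np.linspace(z.min(), z.max(), n_bins + 1)[1:-1]  # interior bin edges
    bin_idx = np.digitize(z, edges)
    counts = np.bincount(bin_idx, minlength=n_bins)
    weights = 1.0 / counts[bin_idx]
    return WeightedRandomSampler(torch.as_tensor(weights, dtype=torch.double),
                                 num_samples=len(z), replacement=True)

# usage:
# loader = DataLoader(dataset, batch_size=1024, sampler=make_uniform_bin_sampler(z))
```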
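
Config-override sketch: CLI overrides applied on top of a nested dict config via dotted `key=value` pairs; the `--set` flag name and the config layout are assumptions, not the project's actual interface.

```python
import argparse
import ast

def apply_overrides(config, overrides):
    """Apply dotted key=value pairs from the command line on top of a nested config dict."""
    for item in overrides:
        key, raw = item.split("=", 1)
        node = config
        *parents, last = key.split(".")
        for part in parents:
            node = node[part]
        try:
            node[last] = ast.literal_eval(raw)  # keep ints/floats/bools as real types
        except (ValueError, SyntaxError):
            node[last] = raw                    # fall back to a plain string

parser = argparse.ArgumentParser()
parser.add_argument("--set", nargs="*", default=[],
                    help="config overrides, e.g. --set expert_0.batch_size=2048")
# args = parser.parse_args()
# apply_overrides(config, args.set)
```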
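
Optimizer-factory sketch: building optimizers (including Rprop) from the config by name, assuming the config stores the optimizer class name and its keyword arguments; the key names are placeholders.

```python
import torch

def build_optimizer(params, opt_config):
    """Build any torch.optim optimizer (SGD, Adam, Rprop, ...) from a config dict,
    e.g. {"name": "Rprop", "args": {"lr": 0.01}}."""
    opt_cls = getattr(torch.optim, opt_config["name"])
    return opt_cls(params, **opt_config.get("args", {}))

# usage inside a LightningModule:
# def configure_optimizers(self):
#     return build_optimizer(self.parameters(), self.hparams["optimizer"])
```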
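
Git-state sketch: saving the commit id and the uncommitted diff next to the experiment log; the log directory and file names are placeholders.

```python
import os
import subprocess

def save_git_state(log_dir):
    """Store the current commit id and the uncommitted diff in the experiment log directory."""
    commit = subprocess.check_output(["git", "rev-parse", "HEAD"]).decode().strip()
    diff = subprocess.check_output(["git", "diff", "HEAD"]).decode()
    with open(os.path.join(log_dir, "git_commit.txt"), "w") as f:
        f.write(commit + "\n")
    with open(os.path.join(log_dir, "git_diff.patch"), "w") as f:
        f.write(diff)
```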
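
Per-expert hparams sketch: merging expert-specific values over the global defaults, assuming the config is a nested dict; the `default` and `expert_<i>` keys are assumptions, not the project's actual schema.

```python
def expert_hparams(config, expert_idx):
    """Start from the default hyperparameters and overwrite them with any
    values given for a specific expert, e.g. a different batch size."""
    hparams = dict(config["default"])                       # global defaults
    hparams.update(config.get(f"expert_{expert_idx}", {}))  # per-expert overrides
    return hparams

# example config layout:
# config = {"default": {"batch_size": 1024, "lr": 1e-3},
#           "expert_2": {"batch_size": 4096}}
```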