So far, we haven't, no; the focus to date has just been on the software/API
development and not on specific model architectures. It is a great idea,
though, and we're very open to contributions on that front! It's also on
our roadmap going forward.
Worth noting: the paper you reference uses MIMIC-III, not MIMIC-IV. That doesn't refute the value of getting results on standard benchmarks, but I just wanted to clarify that there is a dataset difference here.
@mmcdermott Thanks for the response! I'll be looking forward to seeing its performance on such downstream evaluation tasks!
Also, duly noted that the benchmark paper uses MIMIC-III, but many papers evaluate a subset of those tasks (length of stay, diagnosis levels, readmission, mortality, etc.) on MIMIC-IV, for instance UniHPF. A rough sketch of how such labels could be derived is below.
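For concreteness, here is a minimal sketch (not from this repo, and not how any of the cited papers define their cohorts) of how a few of those labels could be derived from the standard MIMIC-IV hosp/admissions table with pandas. The file path and the 30-day readmission window are illustrative assumptions; the benchmark papers differ in details like cohort filtering and gap windows.

```python
# Sketch only: simple downstream labels (in-hospital mortality, length of stay,
# 30-day readmission) from the MIMIC-IV hosp/admissions table.
# The path below is a placeholder for wherever MIMIC-IV is stored locally.
import pandas as pd

adm = pd.read_csv(
    "mimiciv/hosp/admissions.csv.gz",
    parse_dates=["admittime", "dischtime"],
)

# In-hospital mortality: MIMIC-IV already provides this as a flag.
adm["mortality"] = adm["hospital_expire_flag"].astype(int)

# Length of stay in days.
adm["los_days"] = (adm["dischtime"] - adm["admittime"]).dt.total_seconds() / 86400

# 30-day readmission: next admission for the same subject within 30 days of discharge.
adm = adm.sort_values(["subject_id", "admittime"])
adm["next_admittime"] = adm.groupby("subject_id")["admittime"].shift(-1)
gap_days = (adm["next_admittime"] - adm["dischtime"]).dt.days
adm["readmit_30d"] = ((gap_days >= 0) & (gap_days <= 30)).astype(int)

labels = adm[["subject_id", "hadm_id", "mortality", "los_days", "readmit_30d"]]
```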
Was there any benchmarking done on EHR tasks like next-disease or medication prediction, or on any of the tasks described in this paper: A Comprehensive EHR Timeseries Pre-training Benchmark?
@mmcdermott, @bnestor?