
Benchmark results over standard EHR tasks like diagnosis prediction #41

Open · avani17101 opened this issue Jul 21, 2023 · 3 comments
Labels: enhancement (New feature or request)

avani17101 commented Jul 21, 2023

Was there any benchmarking done on standard EHR tasks, such as next-disease or medication prediction, or any of the tasks described in this paper: A Comprehensive EHR Timeseries Pre-training Benchmark?
@mmcdermott, @bnestor?

mmcdermott (Owner) commented Jul 21, 2023 via email

mmcdermott (Owner) commented:

Worth noting, the paper you reference also uses MIMIC-III, not MIMIC-IV. That doesn't refute the value of getting results on standard benchmarks; I just wanted to clarify that there is a dataset difference here.

mmcdermott added the enhancement (New feature or request) label on Jul 23, 2023
avani17101 (Author) commented Jul 24, 2023

@mmcdermott Thanks for the response! I will be looking forward to its performance on such downstream evaluation tasks.

Also, duly noted that the benchmark paper is on MIMIC-III, but there are many papers evaluating a subset of those tasks (length of stay, diagnosis levels, readmission, mortality, etc.) on MIMIC-IV, for instance UniHPF.
