From 8962ea20656e318c234d4c08ec136d781040686d Mon Sep 17 00:00:00 2001
From: Adrienne Mendrik <79082794+AdrienneMendrik@users.noreply.github.com>
Date: Mon, 25 Mar 2024 09:26:54 +0100
Subject: [PATCH] Update score.py

---
 score.py | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/score.py b/score.py
index 5d7ce73..daf2eaf 100644
--- a/score.py
+++ b/score.py
@@ -1,17 +1,15 @@
 """
-This script calls submission.py. Add your method to submission.py to run your
-prediction method.
+This script is used to calculate the metrics on the challenge leaderboards.
 
-To test your submission use the following command:
+It can be used with the following command:
 
-python run.py predict
+python score.py score
 
 For example:
 
-python run.py predict data/PreFer_fake_data.csv
+python score.py score PreFer_fake_data_predictions.csv PreFer_fake_data_outcome.csv
 
-Optionally, you can use the score function to calculate evaluation scores given
-your predictions and the ground truth within the training dataset.
+Note: The ground truth (outcome) needs to be in a separate file with two columns (nomem_encr, new_child)
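
For readers who want a concrete picture of the command described in the updated docstring, the following is a minimal sketch of a scoring script along these lines, not the repository's actual score.py. It assumes the predictions file has columns (nomem_encr, prediction), the outcome file has the two columns named above (nomem_encr, new_child), and that accuracy, precision, recall, and F1 stand in for the leaderboard metrics; the real metric set and column handling are defined by score.py itself.

# Hypothetical sketch, not the repository's score.py.
# Usage it mimics: python score.py score PreFer_fake_data_predictions.csv PreFer_fake_data_outcome.csv
import sys

import pandas as pd
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score


def score(prediction_path: str, ground_truth_path: str) -> dict:
    """Merge predictions with the outcome file on nomem_encr and compute metrics."""
    predictions = pd.read_csv(prediction_path)   # assumed columns: nomem_encr, prediction
    ground_truth = pd.read_csv(ground_truth_path)  # columns: nomem_encr, new_child

    # Align predictions and outcomes on the person identifier.
    merged = pd.merge(ground_truth, predictions, on="nomem_encr", how="left")

    # Rows without a prediction are dropped here; the real script may treat them differently.
    merged = merged.dropna(subset=["prediction"])
    y_true = merged["new_child"].astype(int)
    y_pred = merged["prediction"].astype(int)

    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
    }


if __name__ == "__main__":
    # Expected call: python score.py score <predictions.csv> <outcome.csv>
    if len(sys.argv) != 4 or sys.argv[1] != "score":
        sys.exit("Usage: python score.py score <predictions.csv> <outcome.csv>")
    print(score(sys.argv[2], sys.argv[3]))

Keeping the outcome file separate from the predictions file, as the note in the docstring requires, lets the merge on nomem_encr act as the single point where the two are aligned before any metric is computed.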