ENH: calculating d-prime from confusion matrices and samples #8
base: master
Conversation
…lues (+ minor changes) --- tests should be added
Thanks, this looks good! It needs some refactoring + tests, but it should be ready to go. Comments to follow.
```diff
 #
 # License: BSD

-__all__ = ['dprime']
+__all__ = ['dprime', 'dprime_from_confusion_ova']
```
Not sure about the _ova part; it could be made more generic if one provides the binary output codes used to compute the confusion matrix -- what do you think?
By default it could be interpreted as OvR (only if it is square), but any binary-like output code could work -- we would just need a convention for it (e.g. -1 = False, +1 = True, 0 = ignore). At this stage we should just assume it's OvR and let the user know in the docstring, but possibly open up the possibility of other output codes (e.g. OvO).
Thoughts?
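To make the proposed convention concrete, here is a hypothetical sketch (not code from this PR; ovr_codes and collapse_confusion are illustrative names) of how a -1/+1/0 output code could collapse an n-by-n confusion matrix into 2x2 binary counts:

```python
import numpy as np

# Hypothetical output-code matrix for 3 classes under OvR:
# rows = binary sub-problems, columns = classes
# +1 = True (positive), -1 = False (negative), 0 = ignore
ovr_codes = np.array([
    [+1, -1, -1],
    [-1, +1, -1],
    [-1, -1, +1],
])

def collapse_confusion(cm, code):
    """Collapse an n-by-n confusion matrix into the four binary counts
    (hits, misses, false alarms, correct rejections) under one code row.
    Classes coded 0 are simply left out of all four sums."""
    cm = np.asarray(cm, dtype=float)
    pos = code == +1
    neg = code == -1
    hits = cm[np.ix_(pos, pos)].sum()    # positive class, predicted positive
    misses = cm[np.ix_(pos, neg)].sum()  # positive class, predicted negative
    fas = cm[np.ix_(neg, pos)].sum()     # negative class, predicted positive
    crs = cm[np.ix_(neg, neg)].sum()     # negative class, predicted negative
    return hits, misses, fas, crs
```

With this shape, OvR is just the identity-like code matrix above, and OvO codes would have exactly one +1, one -1, and zeros elsewhere per row.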
That is a definite possibility. I like that convention --- will work on that.
Meanwhile, the current version of dprime_from_confusion_ova() is meant to be used when there's no access to internal representations / decision making (like human data). This function computes n OvA d' values from the given n-by-n confusion matrix --- but I admit that there's no clear analytical/mathematical connection between a d' computed from an n-way confusion matrix and one computed from a 2-way binary classifier.
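For reference, a minimal sketch of the OvA computation described here, under the standard signal-detection definition d' = Z(hit rate) - Z(false-alarm rate). This is an illustrative assumption, not the PR's actual implementation (which may, e.g., clip perfect rates to avoid infinite d'):

```python
import numpy as np
from scipy.stats import norm

def dprime_from_confusion_ova_sketch(cm):
    """One d' per class from an n-by-n confusion matrix (rows = true
    class, columns = predicted class), treating each class as a
    one-vs-all binary problem. No clipping: a perfect hit rate or a
    zero false-alarm rate yields an infinite d'."""
    cm = np.asarray(cm, dtype=float)
    n = cm.shape[0]
    dprimes = np.empty(n)
    for k in range(n):
        hits = cm[k, k]                       # class k predicted as k
        misses = cm[k].sum() - hits           # class k predicted as others
        fas = cm[:, k].sum() - hits           # others predicted as k
        crs = cm.sum() - hits - misses - fas  # others predicted as others
        hr = hits / (hits + misses)           # hit rate
        far = fas / (fas + crs)               # false-alarm rate
        dprimes[k] = norm.ppf(hr) - norm.ppf(far)
    return dprimes
```

For a symmetric 2x2 matrix like [[8, 2], [2, 8]], both classes get the same d' (here Z(0.8) - Z(0.2) ≈ 1.68).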
Do you have real-world tests that could be reduced to regression tests?
Which part are you mentioning? (Did you put this comment here by mistake?)
Methinks this would be closer to what you've suggested. Take a look and let me know if you have some comments!
```diff
 """Computes the Accuracy of the predictions (also known as the
 zero-one score).

+def accuracy(A, B=None, mode=DEFAULT_ACCURACY_MODE, \
```
Huh? Why is accuracy changing here?
I added support for confusion matrices in accuracy(), as in dprime(). There are some changes/rearrangements, so it might be good to take a look at the whole code.
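For context, accuracy computed directly from a confusion matrix is simply the diagonal mass over the total count. A minimal sketch of that idea (not the PR's code, which also keeps the original predictions-based path):

```python
import numpy as np

def accuracy_from_confusion(cm):
    """Accuracy (zero-one score) from an n-by-n confusion matrix:
    correct predictions sit on the diagonal, so accuracy is
    trace(cm) / total count."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()
```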
@hahong, your experimental features on dprime could go in …
(Gosh.. time goes fast.) You mean the "wildwest" branch? ;-)
Actually the …
Aha, I see. That sounds good to me.
You can basically consider …
Two functions were added:
- dprime_from_confusion_ova()
- dprime_from_samp()
Note: unit tests should be added!
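For reference, a sample-based d' is conventionally the difference of sample means scaled by the pooled standard deviation. A hypothetical sketch under that assumption (dprime_from_samp_sketch is an illustrative name; the PR's dprime_from_samp() may differ, e.g. in its variance estimator):

```python
import numpy as np

def dprime_from_samp_sketch(pos, neg):
    """d' from two samples of decision values (e.g. target vs.
    distractor responses), using the pooled-variance definition:
    d' = (mean(pos) - mean(neg)) / sqrt((var(pos) + var(neg)) / 2)."""
    pos = np.asarray(pos, dtype=float)
    neg = np.asarray(neg, dtype=float)
    num = pos.mean() - neg.mean()
    denom = np.sqrt((pos.var(ddof=1) + neg.var(ddof=1)) / 2.0)
    return num / denom
```

For example, two samples with unit variance whose means differ by 1 give d' = 1.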