Compute AUC without needing specific thresholds #98

Open
ericphanson opened this issue Dec 19, 2022 · 1 comment

ericphanson commented Dec 19, 2022

Currently, Lighthouse asks for a set of input thresholds (e.g. 0.0:0.01:1.0), computes binary classification stats at each threshold, forms the corresponding fpr/tpr (ROC) curves, and then computes the area underneath those curves with the trapezoid rule.
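
Roughly, that amounts to something like the following (a plain-Julia sketch for illustration only; `trapezoid_roc_auc` and its `thresholds` keyword are made-up names here, not Lighthouse's actual API or implementation):

# Sketch of a threshold-grid ROC AUC: one (fpr, tpr) point per threshold,
# then trapezoid-rule integration over the resulting polyline.
function trapezoid_roc_auc(positive_scores, negative_scores; thresholds=0.0:0.01:1.0)
    m, n = length(positive_scores), length(negative_scores)
    # At each threshold t, predict "positive" whenever the soft-label is >= t.
    tpr = [count(>=(t), positive_scores) / m for t in thresholds]
    fpr = [count(>=(t), negative_scores) / n for t in thresholds]
    # Order the ROC points by fpr and integrate with the trapezoid rule.
    perm = sortperm(fpr)
    fpr, tpr = fpr[perm], tpr[perm]
    return sum((fpr[i + 1] - fpr[i]) * (tpr[i + 1] + tpr[i]) / 2 for i in 1:(length(fpr) - 1))
end

With a coarse grid, distinct scores can fall between thresholds, so the curve (and hence the AUC estimate) depends on the grid chosen.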

I came across An Improved Model Selection Heuristic for AUC today, which, among other things, points out that you can compute the AUC without reference to specific thresholds by sorting the soft-labels of all instances and then doing a particular kind of count:

[Screenshots of the relevant definitions from the paper]

I think this is a nice algorithm we should consider using, so that the accuracy of the AUC estimate is not influenced by the granularity of the thresholds chosen.
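
For concreteness, writing $p_1,\dots,p_m$ for the positive soft-labels and $q_1,\dots,q_n$ for the negative ones (my notation, which may differ from the paper's), the two quantities the code below computes are

$$\mathrm{AUC} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\mathbf{1}[p_i > q_j], \qquad \mathrm{sAUC} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}(p_i - q_j)\,\mathbf{1}[p_i > q_j].$$

Sorting the pooled scores once lets both double sums be accumulated in a single pass, so the cost is $O((m+n)\log(m+n))$ rather than $O(mn)$, and no threshold grid is involved.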

ericphanson commented Dec 19, 2022

Some (minimally tested) code to compute the ROC AUC in this way (the "plain" version), along with the "scored" ROC AUC they discuss in that paper:

function ROC_AUC(positive_scores, negative_scores)
    m = length(positive_scores)
    n = length(negative_scores)

    # Pool the soft-labels into one table, tagging each with its class.
    T = promote_type(eltype(positive_scores), eltype(negative_scores))
    table = @NamedTuple{score::T, label::String}[]
    sizehint!(table, m + n)

    append!(table, ((; score, label="positive") for score in positive_scores))
    append!(table, ((; score, label="negative") for score in negative_scores))
    sort!(table, by = x -> x.score, rev=true)

    # Single pass over the scores in decreasing order:
    # * `c` counts the positives seen so far, so each negative adds the number of
    #   positives ranked above it to `AUC` (the pairwise count).
    # * `scored_c` / `scored_r` accumulate the positive / negative scores seen so far:
    #   `scored_AUC` sums the positive score of each pair where the positive outranks
    #   the negative, and `scored_AOC` sums the negative score of each pair where the
    #   negative outranks the positive.
    # Note: a positive/negative tie is credited 0 or 1 depending on sort order,
    # rather than the conventional 1/2.
    scored_AOC = zero(T)
    scored_AUC = zero(T)
    scored_r = zero(T)
    scored_c = zero(T)
    c = 0
    AUC = zero(T)
    for row in table
        if row.label == "positive"
            scored_c += row.score
            scored_AOC += scored_r
            c += 1
        else
            scored_r += row.score
            scored_AUC += scored_c
            AUC += c
        end
    end
    # Normalize by the number of positive-negative pairs.
    R⁻ = (m*scored_r - scored_AOC)/(m*n)
    R⁺ = scored_AUC/(m*n)

    scored = R⁺ - R⁻
    plain = AUC / (m*n)
    return (; scored, plain)
end

# Example 1 from the paper - agrees!
using Test

M1 = ROC_AUC([1.0, 0.7, 0.6],  [0.5, 0.4, 0.0])
M2 = ROC_AUC([1.0, 0.9, 0.5],  [0.6, 0.2, 0.0])

@test M1 == (scored = 0.46666666666666656, plain = 1.0)
@test M2 == (scored = 0.5444444444444443, plain = 0.8888888888888888)
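
As an extra sanity check (not from the paper), the same quantities can be computed by brute force over all positive/negative pairs; with no positive/negative score ties, this should agree with the single-pass version up to floating-point rounding:

# Brute-force O(m*n) cross-check of the single-pass implementation above.
function pairwise_AUC(positive_scores, negative_scores)
    m, n = length(positive_scores), length(negative_scores)
    plain  = sum(p > q for p in positive_scores, q in negative_scores) / (m * n)
    scored = sum((p - q) * (p > q) for p in positive_scores, q in negative_scores) / (m * n)
    return (; scored, plain)
end

pos, neg = rand(100), rand(50)
M3 = ROC_AUC(pos, neg)
M4 = pairwise_AUC(pos, neg)
@test M3.plain ≈ M4.plain
@test M3.scored ≈ M4.scored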
