Taking weighting seriously #487

Open · wants to merge 117 commits into base: master

Changes from 105 commits (117 commits total)
1754cbd
WIP
gragusa Jun 10, 2022
1d778a5
WIP
gragusa Jun 15, 2022
12121a3
WIP
gragusa Jun 15, 2022
4363ba4
Taking weights seriously
gragusa Jun 17, 2022
ca702dc
WIP
gragusa Jun 18, 2022
e2b2d12
Taking weights seriously
gragusa Jun 21, 2022
bc8709a
Merge branch 'master' of https://github.com/JuliaStats/GLM.jl into Ju…
gragusa Jun 21, 2022
84cd990
Add depwarn for passing wts with Vector
gragusa Jun 22, 2022
cbc329f
Cosmetic changes
gragusa Jun 22, 2022
23d67f5
WIP
gragusa Jun 23, 2022
f4d90a9
Fix loglik for weighted models
gragusa Jul 4, 2022
6b7d95c
Fix remaining issues
gragusa Jul 15, 2022
c236b82
Final commit
gragusa Jul 15, 2022
d4bd0c2
Merge branch 'master'
gragusa Jul 15, 2022
8bdfb55
Fix merge
gragusa Jul 15, 2022
3eb2ca4
Fix nulldeviance
gragusa Jul 16, 2022
63c8358
Bypass crossmodelmatrix from StatsAPI
gragusa Jul 16, 2022
e93a919
Delete momentmatrix.jl
gragusa Jul 16, 2022
7bb0959
Delete scratch.jl
gragusa Jul 16, 2022
ded17a8
Delete settings.json
gragusa Jul 16, 2022
3346774
AbstractWeights are required to be real
gragusa Sep 5, 2022
7376e78
Update src/glmfit.jl
gragusa Sep 5, 2022
a738268
Apply suggestions from code review
gragusa Sep 5, 2022
c9459e7
Merge pull request #2 from JuliaStats/master
gragusa Sep 5, 2022
6af3ca5
Throw error if GlmResp weights are not AbstractWeights
gragusa Sep 5, 2022
0ded1d4
Addressing review comments
gragusa Sep 5, 2022
d923e48
Reexport aweights, pweights, fweights
gragusa Sep 5, 2022
84f27d1
Fixed remaining issues with null loglikelihood
gragusa Sep 6, 2022
8804dc1
Fix nullloglikelihood tests
gragusa Sep 6, 2022
7f3aa36
Do not dispatch on Weights but use if
gragusa Sep 6, 2022
f67a8e0
Do not dispatch on Weights use if
gragusa Sep 6, 2022
23a3e87
Fix inferred test
gragusa Sep 6, 2022
5481284
Use if instead of dispatching on Weights
gragusa Sep 6, 2022
d12222e
Add doc for weights and fix output
gragusa Sep 7, 2022
a17e812
Fix docs failures
gragusa Sep 7, 2022
58dec0c
Fix pweights stderror even for rank deficient design
gragusa Sep 7, 2022
a6f5c66
Add test for pweights stderror
gragusa Sep 7, 2022
92ddb1e
Export UnitWeights
gragusa Sep 7, 2022
0c61fff
Fix documentation
gragusa Sep 7, 2022
8b0e8e1
Make cooksdistance work with rank deficient design
gragusa Sep 7, 2022
f609f06
Test cooksdistance with rank deficient design
gragusa Sep 7, 2022
23f3d03
Fix CholeskyPivoted signature in docs
gragusa Sep 8, 2022
2749b84
Make nancolidx v1.0 and v1.1 friendly
gragusa Sep 8, 2022
82e472b
Fix signatures
gragusa Sep 9, 2022
2d6aaed
Correct implementation of momentmatrix
gragusa Sep 9, 2022
dbc9ae9
Test moment matrix
gragusa Sep 9, 2022
e0d9cdf
Apply suggestions from code review
gragusa Sep 23, 2022
46e8f92
Incorporate suggestions of reviewer
gragusa Sep 23, 2022
6df401b
Deals with review comments
gragusa Sep 24, 2022
ca15eb8
Small fix
gragusa Sep 24, 2022
0c18ae9
Small fix
gragusa Sep 25, 2022
54d68d1
Apply suggestions from code review
gragusa Oct 3, 2022
422a8cd
Merge branch 'master' into JuliaStats-master
gragusa Oct 3, 2022
d6d4e6b
Fix vcov dispatch for vcov
gragusa Oct 3, 2022
b457d74
Fix dispatch of _vcov
gragusa Oct 3, 2022
b087679
Revert changes
gragusa Oct 3, 2022
a44e137
Update src/glmfit.jl
gragusa Oct 3, 2022
11db2c4
Fix weighted keyword in modelmatrix
gragusa Oct 3, 2022
b649d4f
perf in nulldeviance for unweighted models
gragusa Oct 3, 2022
170148c
Merge branch 'JuliaStats-master' of github.com:gragusa/GLM.jl into Ju…
gragusa Oct 3, 2022
29c43cb
Fixed std error for probability weights
gragusa Oct 19, 2022
279e533
Getting there (& switch Analytics to Importance)
gragusa Oct 20, 2022
afb145e
.= instead of copy!
gragusa Oct 20, 2022
2cead0a
Remove comments
gragusa Oct 20, 2022
a1ec49f
up
gragusa Oct 20, 2022
97bf28d
Speedup cooksdistance
gragusa Oct 23, 2022
9ce2d89
Revert back to AnalyticWeights
gragusa Oct 24, 2022
9bddf63
Add extensive tests for AnalyticWeights
gragusa Oct 24, 2022
3fe045a
Add extensive tests for AnalyticWeights
gragusa Oct 24, 2022
852e307
Delete scratch.jl
gragusa Oct 25, 2022
d1ba3e5
Delete analytic_weights.jl
gragusa Oct 25, 2022
831f280
Follow reviewer suggestions [Batch 1]
gragusa Nov 15, 2022
b00dc16
Follow reviewer's suggestions [Batch 2]
gragusa Nov 15, 2022
0825324
probability weights vcov uses momentmatrix
gragusa Nov 15, 2022
48d15fb
Fix ProbabilityWeights vcov and tests
gragusa Nov 16, 2022
3338eab
Use leverage from StatsAPI
gragusa Nov 17, 2022
c27c749
Merge branch 'master' into JuliaStats-master
gragusa Nov 17, 2022
970e26e
Rebase against master
gragusa Nov 17, 2022
8832e9d
Fix test
gragusa Nov 17, 2022
9eb2390
Merge remote-tracking branch 'origin/master' into JuliaStats-master
gragusa Dec 20, 2022
587c129
Test on 1.6
gragusa Dec 20, 2022
fa63a9a
Address reviewer comments
gragusa Dec 29, 2022
807731a
Merge branch 'master' of github.com:JuliaStats/GLM.jl into JuliaStats…
gragusa Jun 16, 2023
72996fc
Merge branch 'master' into JuliaStats-master
andreasnoack Nov 19, 2024
1ee383a
Merge remote-tracking branch 'upstream/master' into JuliaStats-master
gragusa Nov 19, 2024
ba52ce9
Merge from origin
gragusa Nov 19, 2024
5e790df
Fix broken test of dof_residual
gragusa Nov 19, 2024
50c1a96
Fix testing issues
gragusa Nov 19, 2024
c4f7959
Fix docs
gragusa Nov 19, 2024
d2b5cb0
Added tests for ftest. They throw for pweights
gragusa Nov 25, 2024
cd165d7
Make ftest throw if a model weighted by pweights is passed
gragusa Nov 25, 2024
606a419
Fix how loglikelihood throws for pweights weighted models
gragusa Nov 25, 2024
a1a1e10
Merge branch 'master' of github.com:JuliaStats/GLM.jl into JuliaStats…
gragusa Nov 25, 2024
5d948de
Remove StatsPlots dependence.
gragusa Nov 25, 2024
4fb18df
Fix weighting with :qr method.
gragusa Nov 25, 2024
56d81ae
Add filter to jldoctest string
gragusa Dec 11, 2024
a2357cf
Fix problem with docstrings
gragusa Dec 11, 2024
6068d2a
Update docs/src/index.md
gragusa Dec 12, 2024
930a8cb
Remove trailing white spaces
gragusa Dec 12, 2024
107d17d
Add mention of UnitWeights in the weights discussion
gragusa Dec 12, 2024
a003b10
Remove trailing white spaces
gragusa Dec 12, 2024
1c06c7e
Change delbeta! signature
gragusa Dec 12, 2024
b41cce7
Add tests for dropcollinear=false
gragusa Dec 12, 2024
2730277
Minor cosmetic changes
gragusa Dec 12, 2024
cdeb1a3
Add weighting information in COMMON_FIT_KWARGS_DOCS
gragusa Dec 12, 2024
95d506e
Add test for leverage
gragusa Dec 13, 2024
f124589
[wip] work on leverage
gragusa Dec 13, 2024
cbdadbc
Use inverse
gragusa Dec 13, 2024
2386ab9
Test leverage
gragusa Dec 13, 2024
36326ff
Comment cookdistance
gragusa Dec 13, 2024
f26bc0e
Committed by mistake
gragusa Dec 13, 2024
2bc2138
leverage returns a vec
gragusa Dec 13, 2024
0569600
Fix cookdistance return type
gragusa Dec 13, 2024
dd1b4a8
Update docs/src/index.md
gragusa Dec 18, 2024
1c5953d
Update docs/src/index.md
gragusa Dec 18, 2024
cd39578
Update src/glmfit.jl
gragusa Dec 18, 2024
574ec69
Update src/linpred.jl
gragusa Dec 18, 2024
3 changes: 2 additions & 1 deletion docs/Project.toml
Original file line number Diff line number Diff line change
@@ -3,6 +3,7 @@ CategoricalArrays = "324d7699-5711-5eae-9e2f-1d82baa6b597"
DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
GLM = "38e38edf-8417-5370-95a0-9cbb8c7f171a"
Optim = "429524aa-4258-5aef-a3af-852621145aeb"
RDatasets = "ce6b1742-4840-55fa-b093-852dadbb1d8b"
StableRNGs = "860ef19b-820b-49d6-a774-d7a799459cd3"
@@ -12,4 +13,4 @@ StatsModels = "3eaba693-59b7-5ba5-a881-562e759f1c8d"
[compat]
DataFrames = "1"
Documenter = "1"
Optim = "1.6.2"
Optim = "1.6.2"
2 changes: 1 addition & 1 deletion docs/src/api.md
@@ -2,7 +2,7 @@

```@meta
DocTestSetup = quote
using CategoricalArrays, DataFrames, Distributions, GLM, RDatasets
using CategoricalArrays, DataFrames, Distributions, GLM, RDatasets, StableRNGs
end
```

45 changes: 12 additions & 33 deletions docs/src/examples.md
@@ -61,7 +61,7 @@ julia> dof(ols)
3

julia> dof_residual(ols)
1.0
1

julia> round(aic(ols); digits=5)
5.84252
@@ -214,8 +214,8 @@ sales ^ 2 -6.94594e-9 3.72614e-9 -1.86 0.0725 -1.45667e-8 6.7487e-10
```jldoctest
julia> data = DataFrame(X=[1,2,2], Y=[1,0,1])
3×2 DataFrame
Row │ X Y
│ Int64 Int64
Row │ X Y
│ Int64 Int64
─────┼──────────────
1 │ 1 1
2 │ 2 0
@@ -319,8 +319,8 @@ julia> using GLM, RDatasets

julia> form = dataset("datasets", "Formaldehyde")
6×2 DataFrame
Row │ Carb OptDen
│ Float64 Float64
Row │ Carb OptDen
│ Float64 Float64
─────┼──────────────────
1 │ 0.1 0.086
2 │ 0.3 0.269
@@ -473,8 +473,8 @@ julia> dobson = DataFrame(Counts = [18.,17,15,20,10,21,25,13,13],
Outcome = categorical([1,2,3,1,2,3,1,2,3]),
Treatment = categorical([1,1,1,2,2,2,3,3,3]))
9×3 DataFrame
Row │ Counts Outcome Treatment
│ Float64 Cat… Cat…
Row │ Counts Outcome Treatment
│ Float64 Cat… Cat…
─────┼─────────────────────────────
1 │ 18.0 1 1
2 │ 17.0 2 1
@@ -510,32 +510,11 @@ julia> round(deviance(gm1), digits=5)

In this example, we choose the best model from a set of λs, based on minimum BIC.

```jldoctest
```jldoctest; filter = r"(\d*)\.(\d{7})\d+" => s"\1.\2***"
julia> using GLM, RDatasets, StatsBase, DataFrames, Optim

julia> trees = DataFrame(dataset("datasets", "trees"))
31×3 DataFrame
Row │ Girth Height Volume
│ Float64 Int64 Float64
─────┼──────────────────────────
1 │ 8.3 70 10.3
2 │ 8.6 65 10.3
3 │ 8.8 63 10.2
4 │ 10.5 72 16.4
5 │ 10.7 81 18.8
6 │ 10.8 83 19.7
7 │ 11.0 66 15.6
8 │ 11.0 75 18.2
⋮ │ ⋮ ⋮ ⋮
25 │ 16.3 77 42.6
26 │ 17.3 81 55.4
27 │ 17.5 82 55.7
28 │ 17.9 80 58.3
29 │ 18.0 80 51.5
30 │ 18.0 80 51.0
31 │ 20.6 87 77.0
16 rows omitted

julia> trees = DataFrame(dataset("datasets", "trees"));

julia> bic_glm(λ) = bic(glm(@formula(Volume ~ Height + Girth), trees, Normal(), PowerLink(λ)));

julia> optimal_bic = optimize(bic_glm, -1.0, 1.0);
@@ -554,9 +533,9 @@ Coefficients:
────────────────────────────────────────────────────────────────────────────
(Intercept) -1.07586 0.352543 -3.05 0.0023 -1.76684 -0.384892
Height 0.0232172 0.00523331 4.44 <1e-05 0.0129601 0.0334743
Girth 0.242837 0.00922555 26.32 <1e-99 0.224756 0.260919
Girth 0.242837 0.00922556 26.32 <1e-99 0.224756 0.260919
────────────────────────────────────────────────────────────────────────────

julia> round(optimal_bic.minimum, digits=5)
156.37638
```
```
114 changes: 110 additions & 4 deletions docs/src/index.md
@@ -123,6 +123,110 @@ x: 4 -0.032673 0.0797865 -0.41 0.6831 -0.191048 0.125702
───────────────────────────────────────────────────────────────────────────
```

## Weighting

Both `lm` and `glm` allow weighted estimation. The three different
[types of weights](https://juliastats.org/StatsBase.jl/stable/weights/) defined in
[StatsBase.jl](https://github.com/JuliaStats/StatsBase.jl) can be used to fit a model:

- `AnalyticWeights` describe a non-random relative importance (usually between 0 and 1) for
each observation. These weights may also be referred to as reliability weights, precision
weights or inverse variance weights. These are typically used when the observations being
weighted are aggregate values (e.g., averages) with differing variances.
- `FrequencyWeights` describe the number of times (or frequency) each observation was seen.
These weights may also be referred to as case weights or repeat weights.
- `ProbabilityWeights` represent the inverse of the sampling probability for each observation,
providing a correction mechanism for under- or over-sampling certain population groups.
These weights may also be referred to as sampling weights.

When no weights are specified, `GLM.jl` internally uses `UnitWeights`, treating all observations as equally weighted.

Contributor: can we add a comment somewhere how these weights are later treated in estimation?

To indicate which kind of weights should be used, the vector of weights must be wrapped in
one of the three weights types, and then passed to the `weights` keyword argument.
Short-hand functions `aweights`, `fweights`, and `pweights` can be used to construct
`AnalyticWeights`, `FrequencyWeights`, and `ProbabilityWeights`, respectively.

We illustrate the API with randomly generated data.

```jldoctest weights
julia> using StableRNGs, DataFrames, GLM

julia> data = DataFrame(y = rand(StableRNG(1), 100), x = randn(StableRNG(2), 100), weights = repeat([1, 2, 3, 4], 25));

julia> m = lm(@formula(y ~ x), data)
LinearModel

y ~ 1 + x

Coefficients:
──────────────────────────────────────────────────────────────────────────
Coef. Std. Error t Pr(>|t|) Lower 95% Upper 95%
──────────────────────────────────────────────────────────────────────────
(Intercept) 0.517369 0.0280232 18.46 <1e-32 0.461758 0.57298
x -0.0500249 0.0307201 -1.63 0.1066 -0.110988 0.0109382
──────────────────────────────────────────────────────────────────────────

julia> m_aweights = lm(@formula(y ~ x), data, wts=aweights(data.weights))
LinearModel

y ~ 1 + x

Coefficients:
──────────────────────────────────────────────────────────────────────────
Coef. Std. Error t Pr(>|t|) Lower 95% Upper 95%
──────────────────────────────────────────────────────────────────────────
(Intercept) 0.51673 0.0270707 19.09 <1e-34 0.463009 0.570451
x -0.0478667 0.0308395 -1.55 0.1239 -0.109067 0.0133333
──────────────────────────────────────────────────────────────────────────

julia> m_fweights = lm(@formula(y ~ x), data, wts=fweights(data.weights))
LinearModel

y ~ 1 + x

Coefficients:
─────────────────────────────────────────────────────────────────────────────
Coef. Std. Error t Pr(>|t|) Lower 95% Upper 95%
─────────────────────────────────────────────────────────────────────────────
(Intercept) 0.51673 0.0170172 30.37 <1e-84 0.483213 0.550246
x -0.0478667 0.0193863 -2.47 0.0142 -0.0860494 -0.00968394
─────────────────────────────────────────────────────────────────────────────

julia> m_pweights = lm(@formula(y ~ x), data, wts=pweights(data.weights))
LinearModel

y ~ 1 + x

Coefficients:
───────────────────────────────────────────────────────────────────────────
Coef. Std. Error t Pr(>|t|) Lower 95% Upper 95%
───────────────────────────────────────────────────────────────────────────
(Intercept) 0.51673 0.0287193 17.99 <1e-32 0.459737 0.573722
x -0.0478667 0.0265532 -1.80 0.0745 -0.100561 0.00482739
───────────────────────────────────────────────────────────────────────────

```

!!! warning

In the old API, weights were passed as `AbstractVectors` and were silently treated in
the internal computation of standard errors and related quantities as `FrequencyWeights`.
Passing weights as `AbstractVector` is still allowed for backward compatibility, but it
is deprecated. When weights are passed following the old API, they are now coerced to
`FrequencyWeights` and a deprecation warning is issued.

The type of the weights will affect the variance of the estimated coefficients and the
quantities involving this variance. The coefficient point estimates will be the same
regardless of the type of weights.

```jldoctest weights
julia> loglikelihood(m_aweights)
-16.296307561384253

julia> loglikelihood(m_fweights)
-25.51860961756451
```
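The pattern documented in the diff above — identical coefficients across weight types, but different standard errors and log-likelihoods — can be reproduced with a plain weighted-least-squares sketch, independent of the Julia API. The NumPy code below is ours (all names hypothetical) and the finite-sample conventions are assumptions, not GLM.jl's exact internal formulas:

```python
import numpy as np

# Hypothetical data mirroring the docs example: 100 observations, weights 1..4.
rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])           # design matrix with intercept
y = 0.5 - 0.05 * x + 0.1 * rng.normal(size=n)
w = np.tile([1.0, 2.0, 3.0, 4.0], 25)          # raw weights, sum(w) = 250

# Weighted normal equations: point estimates are identical for every weight type.
XtWX = X.T @ (w[:, None] * X)
beta = np.linalg.solve(XtWX, X.T @ (w * y))
resid = y - X @ beta
wrss = np.sum(w * resid**2)                    # weighted residual sum of squares

k = X.shape[1]
# FrequencyWeights: each row stands for w_i copies, effective sample size sum(w).
s2_freq = wrss / (w.sum() - k)
vcov_freq = s2_freq * np.linalg.inv(XtWX)
# AnalyticWeights: still n observations; weights only encode relative precision.
s2_aw = wrss / (n - k)
vcov_aw = s2_aw * np.linalg.inv(XtWX)

se_freq = np.sqrt(np.diag(vcov_freq))
se_aw = np.sqrt(np.diag(vcov_aw))
```

Under these conventions the two standard errors differ only by the factor `sqrt((sum(w) - k) / (n - k))` ≈ 1.59, which matches the ratio between the analytic-weight and frequency-weight intercept standard errors in the jldoctest above (0.0270707 vs 0.0170172).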

## Comparing models with F-test

Comparisons between two or more linear models can be performed using the `ftest` function,
@@ -176,8 +280,8 @@ Many of the methods provided by this package have names similar to those in [R](
- `vcov`: variance-covariance matrix of the coefficient estimates


Note that the canonical link for negative binomial regression is `NegativeBinomialLink`, but
in practice one typically uses `LogLink`.
Note that the canonical link for negative binomial regression is `NegativeBinomialLink`,
but in practice one typically uses `LogLink`.

```jldoctest methods
julia> using GLM, DataFrames, StatsBase
@@ -209,12 +313,14 @@ julia> round.(predict(mdl, test_data); digits=8)
9.33333333
```

The [`cooksdistance`](@ref) method computes [Cook's distance](https://en.wikipedia.org/wiki/Cook%27s_distance) for each observation used to fit a linear model, giving an estimate of the influence of each data point.
The [`cooksdistance`](@ref) method computes
[Cook's distance](https://en.wikipedia.org/wiki/Cook%27s_distance) for each observation
used to fit a linear model, giving an estimate of the influence of each data point.
Note that it's currently only implemented for linear models without weights.

```jldoctest methods
julia> round.(cooksdistance(mdl); digits=8)
3-element Vector{Float64}:
3×1 Matrix{Float64}:
2.5
0.25
2.5
27 changes: 15 additions & 12 deletions src/GLM.jl
@@ -12,17 +12,18 @@ module GLM
import Statistics: cor
using StatsAPI
import StatsBase: coef, coeftable, coefnames, confint, deviance, nulldeviance, dof, dof_residual,
loglikelihood, nullloglikelihood, nobs, stderror, vcov,
residuals, predict, predict!,
fitted, fit, model_response, response, modelmatrix, r2, r², adjr2, adjr², PValue
loglikelihood, nullloglikelihood, nobs, stderror, vcov, residuals, predict, predict!,
fitted, fit, model_response, response, modelmatrix, r2, r², adjr2, adjr²,
PValue, weights, leverage
import StatsFuns: xlogy
import SpecialFunctions: erfc, erfcinv, digamma, trigamma
import StatsModels: hasintercept
import Tables
export coef, coeftable, confint, deviance, nulldeviance, dof, dof_residual,
loglikelihood, nullloglikelihood, nobs, stderror, vcov, residuals, predict,
loglikelihood, nullloglikelihood, nobs, stderror, vcov, residuals, predict, predict!,
fitted, fit, fit!, model_response, response, modelmatrix, r2, r², adjr2, adjr²,
cooksdistance, hasintercept, dispersion, vif, gvif, termnames
cooksdistance, hasintercept, dispersion, vif, gvif, termnames, weights, AnalyticWeights,
ProbabilityWeights, FrequencyWeights, UnitWeights, uweights, fweights, pweights, aweights, leverage

export
# types
@@ -109,13 +110,15 @@
If `method=:cholesky` (the default), then the `Cholesky` decomposition method will be used.
If `method=:qr`, then the `QR` decomposition method (which is more stable
but slower) will be used.
- `wts::Vector=similar(y,0)`: Prior frequency (a.k.a. case) weights of observations.
Such weights are equivalent to repeating each observation a number of times equal
to its weight. Do note that this interpretation gives equal point estimates but
different standard errors from analytical (a.k.a. inverse variance) weights and
from probability (a.k.a. sampling) weights which are the default in some other
software.
Can be length 0 to indicate no weighting (default).
- `wts::AbstractWeights`: Weights of observations.
The weights can be of type `AnalyticWeights`, `FrequencyWeights`,
`ProbabilityWeights`, or `UnitWeights`. `AnalyticWeights` describe a non-random
relative importance (usually between 0 and 1) for each observation. These weights may
also be referred to as reliability weights, precision weights or inverse variance weights.
`FrequencyWeights` describe the number of times (or frequency) each observation was seen.
`ProbabilityWeights` represent the inverse of the sampling probability for each observation,
providing a correction mechanism for under- or over-sampling certain population groups. `UnitWeights`
(default) describe the case in which all weights are equal to 1 (so no weighting takes place).
- `contrasts::AbstractDict{Symbol}=Dict{Symbol,Any}()`: a `Dict` mapping term names
(as `Symbol`s) to term types (e.g. `ContinuousTerm`) or contrasts
(e.g., `HelmertCoding()`, `SeqDiffCoding(; levels=["a", "b", "c"])`,
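The docstring text removed in the hunk above notes that frequency weights "are equivalent to repeating each observation a number of times equal to its weight". For point estimates that equivalence is exact, as this small NumPy sketch (ours; all names hypothetical) checks:

```python
import numpy as np

# Hypothetical tiny data set.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(6), rng.normal(size=6)])
y = rng.normal(size=6)
w = np.array([1, 2, 3, 1, 2, 3], dtype=float)  # frequency (case) weights

# Weighted least squares: solve (X'WX) b = X'Wy.
beta_w = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

# Equivalent "expanded" regression: repeat each row w_i times, then plain OLS.
reps = w.astype(int)
Xr = np.repeat(X, reps, axis=0)
yr = np.repeat(y, reps)
beta_r, *_ = np.linalg.lstsq(Xr, yr, rcond=None)

print(np.allclose(beta_w, beta_r))  # prints True
```

As the removed docstring warned, the equivalence holds only for the coefficients: the expanded regression has `sum(w)` degrees of freedom, so standard errors under analytic or probability weights differ.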
9 changes: 8 additions & 1 deletion src/ftest.jl
@@ -57,7 +57,10 @@ F-statistic: 241.62 on 12 observations and 1 degrees of freedom, p-value: <1e-07
"""
function ftest(mod::LinearModel)
hasintercept(mod) || throw(ArgumentError("ftest only works for models with an intercept"))

wts = weights(mod)
if wts isa ProbabilityWeights
throw(ArgumentError("`ftest` for probability weighted models is not currently supported."))
end
rss = deviance(mod)
tss = nulldeviance(mod)

@@ -228,3 +231,7 @@ function show(io::IO, ftr::FTestResult{N}) where N
end
print(io, '─'^totwidth)
end

function ftest(r::LinearModel{T,<:ProbabilityWeights}) where {T}
throw(ArgumentError("`ftest` for probability weighted models is not currently supported."))
end
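The commit log above says the probability-weights `vcov` is built from `momentmatrix`, i.e. from per-observation score contributions. For context, a sandwich-style variance of the kind typically used with sampling weights can be sketched as follows (Python/NumPy; all names are ours, and the finite-sample correction factor `n/(n-k)` is an assumption, not necessarily GLM.jl's exact choice):

```python
import numpy as np

# Hypothetical sampled data; w_i = 1 / (sampling probability of unit i).
rng = np.random.default_rng(2)
n, k = 200, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = 1.0 + 0.5 * X[:, 1] + rng.normal(size=n)
w = rng.uniform(1.0, 4.0, size=n)

bread_inv = X.T @ (w[:, None] * X)                 # "bread" inverse: X'WX
beta = np.linalg.solve(bread_inv, X.T @ (w * y))
r = y - X @ beta

# Per-observation score ("moment matrix") contributions g_i = w_i * r_i * x_i.
G = w[:, None] * r[:, None] * X
meat = G.T @ G                                     # "meat": sum_i g_i g_i'

bread = np.linalg.inv(bread_inv)
vcov = bread @ meat @ bread * n / (n - k)          # sandwich with small-sample factor
se = np.sqrt(np.diag(vcov))
```

Because this variance is robust rather than model-based, the usual residual-sum-of-squares F statistic no longer applies — consistent with the `ftest` guard added in the hunk above.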