When `%O%` is called with a specification of `df` in both base-learners,
e.g. `bbs(x1, df = df1) %O% bbs(x2, df = df2)`, the global `df` for the
Kroneckered base-learner is computed as `df = df1 * df2`.
Thus, the penalty has only one smoothing parameter `lambda`, resulting in an isotropic penalty
`K = lambda * ( kronecker(diag, K1) + kronecker(K2, diag) )`
with global penalty matrix `K`, marginal penalty matrices `K1`, `K2`, and identity matrices `diag`.
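To make the isotropic construction concrete, here is a toy sketch in base R. The second-order difference penalties below are made-up stand-ins for the marginal penalty matrices that mboost builds internally, and the value of `lambda` is arbitrary:

```r
# Toy marginal penalties for two B-spline bases (illustrative stand-ins)
K1 <- crossprod(diff(diag(5), differences = 2))  # 5 x 5 penalty for x1
K2 <- crossprod(diff(diag(4), differences = 2))  # 4 x 4 penalty for x2

lambda <- 0.5  # single global smoothing parameter (arbitrary value)

# Isotropic Kronecker penalty: one lambda for both directions
K <- lambda * (kronecker(diag(4), K1) + kronecker(K2, diag(5)))
dim(K)  # 20 x 20 = (5 * 4) x (5 * 4)
```

Both Kronecker products expand to the dimension of the full tensor-product basis, so `K` is 20 x 20 here, and the single `lambda` scales both directions equally.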
But I would like to specify a Kroneckered effect with an anisotropic penalty. Thus, I implemented a new operator `%A%`, which allows for a different amount of smoothness in the two directions.
For example, `bbs(x1, df = df1) %A% bbs(x2, df = df2)` results in computing two
different smoothing parameters `lambda1`, `lambda2` for the two marginal effects and a global `lambda` to adjust for the global `df`, i.e.
`K = lambda * ( kronecker(diag, lambda1 * K1) + kronecker(lambda2 * K2, diag) )`
where `lambda1` is computed for `df1` and `lambda2` is computed for `df2`. `lambda` is computed such that the global `df` again satisfies `df = df1 * df2`.
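Again as a toy sketch in base R (not the `%A%` internals; the penalty matrices and all lambda values below are hypothetical), the anisotropic penalty would be assembled as:

```r
# Toy marginal penalties (illustrative stand-ins for mboost's internal matrices)
K1 <- crossprod(diff(diag(5), differences = 2))  # 5 x 5 penalty for x1
K2 <- crossprod(diff(diag(4), differences = 2))  # 4 x 4 penalty for x2

lambda1 <- 2    # hypothetical: stronger smoothing in the x1 direction
lambda2 <- 0.1  # hypothetical: weaker smoothing in the x2 direction
lambda  <- 0.5  # hypothetical global factor, chosen so df = df1 * df2 holds

# Anisotropic Kronecker penalty: direction-specific lambda1, lambda2
Ka <- lambda * (kronecker(diag(4), lambda1 * K1) +
                kronecker(lambda2 * K2, diag(5)))
```

The only difference from the isotropic case is that each marginal penalty is pre-scaled by its own smoothing parameter before the Kronecker expansion, with the global `lambda` left to fix the overall degrees of freedom.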
The current implementation of `%A%` is experimental and cannot deal with weights in the model call.
See anisotropicKronecker.txt for the current EXPERIMENTAL implementation in R.
For a MWE on how to use `%A%` instead of `%O%` (which does not make much sense in that case):
In issue #19 I treated the special case where `lambda1 = 0`; it is covered here if `df1` equals the rank of the design matrix for x1.
Do you think it would be generally helpful to have an anisotropic effect like `%A%` in mboost? Do you think the current implementation is sensible? Any comments and suggestions are welcome.
Best
Sarah