
Using the LinearModel helper class

Tim Hallett edited this page Jan 28, 2020 · 57 revisions

When would you use the lm.LinearModel helper class?

A common situation in the model is that you want to determine the probability of an event happening for each person, where that probability depends on characteristics of that person (e.g. their sex and age). Or, perhaps, the time that a newly infected person is expected to survive with some condition depends on their characteristics (e.g. their age at infection and their BMI). In these situations, you could write lots of statements like this:

prob = pd.Series(0.01, index=df.index)
for person_id in df.loc[df.is_alive].index:
   if df.at[person_id, 'age_years'] < 5:
       prob[person_id] *= 10
   elif df.at[person_id, 'age_years'] < 10:
       prob[person_id] *= 2

Or:

prob = pd.Series(0.01, index=df.index)
prob.loc[df['age_years'] < 5] *= 10
prob.loc[(df['age_years'] >= 5) & (df['age_years'] < 10)] *= 2

But this gets messy with lots of conditions, and this logic will often need to be used in several places (e.g. in initialise_population and in the main module event). So, the LinearModel is a way of neatly containing all the information about how something is determined as a function of individual characteristics.

If you find yourself doing anything like the above, use a LinearModel instead!

When would you not use the helper class?

  • LinearModel helpers are very much an alpha-feature of the framework!
  • The way the class works is by abstracting over direct operations on the Pandas dataframe. We don't know whether this will have a significant performance cost.

Where should I set up and store a LinearModel?

  • A LinearModel can be created anywhere in your code and stored for future execution. They are not supported as PARAMETERS of the modules but can be saved as instance variables (i.e. self.xxx) of modules and events. If you do intend to store LMs in this way, we recommend setting up a dictionary in the module's or event's __init__ method with self.lm = dict() and adding any LMs you create to the dictionary with self.lm['tobacco usage'] = LinearModel(xxx). This is to avoid having to initialise many instance variables in your __init__.

  • An expected common usage is to define these models in read_parameters(). You will have read parameters in from an external file and can then assemble the models and store them (as above). They will then be ready for use in initialise_population() and beyond!

Setting up a LinearModel

  • A LinearModel is created by specifying its (i) type, (ii) intercept and (iii) one or more Predictors
  • See the test examples test_logistic_application_low_ex and test_logistic_application_tob for how the pandas operations in the Lifestyle module can be expressed with the LinearModel class
  • General Outline:
     my_lm = LinearModel(
         LinearModelType.ADDITIVE,   # or MULTIPLICATIVE or LOGISTIC
         0.0,                        # the intercept (can be float or int)
         Predictor('property_1').when(..., 0.1).when(..., 0.2),
         Predictor('property_2').when(True, 0.01).otherwise(0.02),
         ...
         Predictor('property_n').when(True, 0.00001).otherwise(0.00002)
     )
    

Here is the canonical example of a LinearModelType.LOGISTIC model:

    baseline_prob = 0.5
    baseline_odds = baseline_prob / (1 - baseline_prob)
    OR_X = 2   # Odds Ratio for Factor 'X'
    OR_Y = 5   # Odds Ratio for Factor 'Y'

    eq = LinearModel(
        LinearModelType.LOGISTIC,                # We specify the Logistic model
        baseline_odds,                           # The intercept term is the baseline_odds 
        Predictor('FactorX').when(True, OR_X),   # We declare that 'FactorX' is a predictor. Anyone that has 'FactorX' as True, will have a higher odds, as given by OR_X
        Predictor('FactorY').when(True, OR_Y)   # We declare that 'FactorY' is a predictor. Anyone that has 'FactorY' as True, will have a higher odds, as given by OR_Y
    )

    df = pd.DataFrame(data={
        'FactorX': [False, True, False, True],
        'FactorY': [False, False, True, True]
    })

    pred = eq.predict(df)       # We provide the dataframe as an argument to the LinearModel's predict method.
    
    # The result will be as expected.
    assert all(pred.values == [
        baseline_prob,
        (baseline_odds * OR_X) / (1 + baseline_odds * OR_X),
        (baseline_odds * OR_Y) / (1 + baseline_odds * OR_Y),
        (baseline_odds * OR_X * OR_Y) / (1 + baseline_odds * OR_X * OR_Y)
    ])

Note that:

  • All models must have an intercept -- this is the value that is given to all individuals / rows in the dataframe. If there are no predictors then this is the final value for each individual / row in the ADDITIVE and MULTIPLICATIVE model types; in the LOGISTIC model type this value is transformed as x/(1+x) to give the final value (see below for more information on this).
  • The model can have no predictors -- in which case, see above! This 'hollow' mode is helpful when sketching out the logic of a module without specifying the details of how individuals' characteristics might affect things.
  • The model can have any number of predictors --- and each will be applied independently and then combined according to the model type (see below).

Calling a LinearModel

Pass a pd.DataFrame as the argument to the LinearModel's .predict() method. Note that the dataframe must contain columns that are named exactly as any (non-external) Predictor name. It will return a series with an index equal to the index of the dataframe that is passed in. Thus: result = my_lm.predict(df)

Common use cases would be:

  • Passing in a subset of the main data frame:
result = my_lm.predict(df.loc[(df['is_alive']) & (df['age_years'] < 5)])
  • Getting the result for one person and extracting the prediction as a value (not a pd.Series):
result = my_lm.predict(df.loc[[person_id]]).values[0]
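A minimal, self-contained sketch of the one-person pattern (here, toy_predict is a hypothetical stand-in for a LinearModel's predict method, and the person_id index values are made up for the illustration):

```python
import pandas as pd

# A small population frame, indexed by person_id (values illustrative)
df = pd.DataFrame({'age_years': [3, 45, 7]}, index=[101, 102, 103])

def toy_predict(frame):
    # Hypothetical stand-in for my_lm.predict: returns a pd.Series
    # with the same index as the frame that was passed in
    return pd.Series(0.01 * frame['age_years'], index=frame.index)

person_id = 103
one_row = df.loc[[person_id]]             # double brackets keep a one-row DataFrame
result = toy_predict(one_row).values[0]   # extract the prediction as a scalar
```

The double brackets matter: `df.loc[person_id]` would return a Series (one row flattened), whereas `df.loc[[person_id]]` returns a one-row DataFrame, which is what a predict method expects.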

Which LinearModelType do I want?

The ModelType determines how the effects of each Predictor are combined to give the final result. Let y be the output of the model, x_i be indicator variables for a particular condition, beta_0 be the intercept term and beta_i the 'effects' then:

  • The LinearModelType.ADDITIVE model will provide:

    y = beta_0 + beta_1 * x_1 + beta_2 * x_2 + ...

  • The LinearModelType.MULTIPLICATIVE model will provide:

    y = beta_0 * (beta_1 * x_1) * (beta_2 * x_2) * ...

  • The LinearModelType.LOGISTIC model is different: it is designed to accept the outputs of standard logistic regression, which are based on 'odds'. Recall the relationship between odds and probabilities: odds = prob / (1 - prob) and, inversely, prob = odds / (1 + odds). The intercept is the baseline odds and the effects are Odds Ratios. The model returns the probability that is implied by the model.
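The three combination rules above can be reproduced in plain pandas (no LinearModel involved; the column names and coefficient values below are made up for this sketch):

```python
import pandas as pd

# Two indicator (0/1) predictors over all four combinations
df = pd.DataFrame({'x1': [0, 1, 0, 1],
                   'x2': [0, 0, 1, 1]})

# ADDITIVE: y = beta_0 + beta_1*x1 + beta_2*x2
beta_0, beta_1, beta_2 = 0.1, 0.2, 0.3
additive = beta_0 + beta_1 * df['x1'] + beta_2 * df['x2']

# MULTIPLICATIVE: each satisfied condition scales the intercept
# (an unsatisfied condition contributes a factor of 1, hence the ** trick)
multiplicative = beta_0 * (2.0 ** df['x1']) * (5.0 ** df['x2'])

# LOGISTIC: the intercept is the baseline odds, the effects are odds
# ratios, and the final odds are converted back to a probability
baseline_odds = 1.0
odds = baseline_odds * (2.0 ** df['x1']) * (5.0 ** df['x2'])
prob = odds / (1 + odds)
```

The `**` trick is just a compact way of saying "apply the effect only where the indicator is 1"; the LinearModel class expresses the same logic through `.when()` clauses.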

Predictors

Predictors specify how a particular column in the pd.DataFrame should affect the result of the LinearModel. This is where the magic happens.

The following uses are supported:

  • A boolean property:
Predictor('property_name').when(True, value_for_true).otherwise(value_for_false)

Predictor('property_name').when(True, value_for_true)   # If false for a row, there will be no effect of this predictor for that row
  • A property for which particular values are important:
Predictor('property_name').when('string_condition', value_for_condition).otherwise(value_for_otherwise)

Predictor('property_name').when('string_condition_one', value_for_condition_one)
                          .when('string_condition_two', value_for_condition_two)
                          .otherwise(value_for_otherwise)
  • A property for which particular ranges are to be selected. NB. .between is an inclusive range, such that .between(a, b) is True for values a and b and all values in-between.
Predictor('property_name').when('.between(low_value, high_value)', value_for_condition).otherwise(value_for_otherwise)

An alternative specification uses inequalities. Note that successive .when() conditions are not evaluated for a row if a prior .when() condition has been satisfied.

Predictor('property_name').when('<10', value_for_less_than_ten)
                          .when('<20', value_for_10_to_20)
                          .when('<30', value_for_20_to_30)
  • A condition that is based on more than one variable. Here the Predictor is defined with an empty set of parentheses and the condition is given in .when() as a string, verbatim as it should be evaluated; the value is assigned to a row when that condition is True.
Predictor().when(string_of_compound_condition_that_will_yield_bool, value_for_compound_condition_true)

For example:

Predictor().when('(property_name_1 == True) & (property_name_2 == "M")', value_for_compound_condition_true)
  • A condition that is a function of the value of the Predictor. The value of the model may depend on a function of the predictor: for example, the model y = k1 + k2*x^2, where k1 and k2 are constants. In this case, we provide the function -- either a named function or a lambda function -- to the .apply() method of the Predictor.
Predictor('property_name').apply(lambda x: (k1 + k2*(x**2)))
  • External variables. An external variable is one that is not in the dataframe -- for example, the year, or some 'environmental' variable that is the same for all individuals at the time the model is used to predict.
Predictor('external_variable_name', external=True).when(2010, 5.0).otherwise(1.0)

If you have specified an 'external' variable, you must pass it as an argument to the predict method.

        my_lm = LinearModel(
            LinearModelType.ADDITIVE,   # an example model type; any type can be used
            0.0,
            Predictor('year', external=True).when(2010, 5.0).otherwise(1.0)
        )
        result = my_lm.predict(df, year=self.sim.date.year)   # the keyword must match the external Predictor's name
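The .apply() effect described above (the model depending on a function of the predictor's value) can be illustrated with plain pandas; k1 and k2 are arbitrary constants chosen for this sketch:

```python
import pandas as pd

k1, k2 = 1.0, 0.5
x = pd.Series([0.0, 1.0, 2.0, 3.0])   # stands in for the predictor column

# The same quadratic as Predictor('property_name').apply(lambda x: k1 + k2*(x**2)),
# evaluated row by row over the column
effect = x.apply(lambda v: k1 + k2 * (v ** 2))
# effect.tolist() == [1.0, 1.5, 3.0, 5.5]
```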

Important Notes

  • Any number of .when() conditions can be specified but the order is important: for one row / individual, successive .when() conditions are not evaluated if a previous .when() condition has been satisfied. This allows simple specification of mapping of an effect to different contiguous ranges (see example above)

  • An .otherwise() condition is not needed. If one is not specified and no other condition is met for that row, then the predictor has no effect. That is, there is an implicit .otherwise(0.0) for ADDITIVE models, an implicit .otherwise(1.0) for MULTIPLICATIVE models, and an implicit .otherwise() condition of Odds Ratio = 1.0 for LOGISTIC models.
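The first-match rule and the implicit .otherwise() can be sketched in plain Python (no TLO imports; first_match is a hypothetical helper that mirrors the semantics described above):

```python
def first_match(value, conditions, otherwise=0.0):
    """Return the effect of the first condition that holds.

    `conditions` is an ordered list of (predicate, effect) pairs, mirroring a
    chain of .when() calls: once a condition is satisfied for a row, later
    conditions are not evaluated. If nothing matches, the implicit
    .otherwise() applies (0.0 here, as for an ADDITIVE model; it would be
    1.0 for a MULTIPLICATIVE model).
    """
    for predicate, effect in conditions:
        if predicate(value):
            return effect
    return otherwise

age_effects = [
    (lambda a: a < 10, 1.0),   # value_for_less_than_ten
    (lambda a: a < 20, 2.0),   # value_for_10_to_20
    (lambda a: a < 30, 3.0),   # value_for_20_to_30
]

first_match(5, age_effects)    # 1.0: '<10' matches first
first_match(15, age_effects)   # 2.0: '<10' fails, '<20' matches
first_match(45, age_effects)   # 0.0: nothing matches, implicit otherwise
```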

Caveats

  • Using the external=True flag on a Predictor currently has a performance penalty, which may be significant with a large population. We are currently monitoring real-world usage to determine how to modify the code to avoid this.