Cookbook
Add requests in this document
- Date arithmetic
- Events
- Understanding assignment by index or row offset
- Assign values to population with specified probability
- Transitioning between multiple states based on probability for each transition
- Creating and extending dataframes
- Plotting maps
- Logging
Date arithmetic
See the Pandas time series documentation: https://pandas.pydata.org/docs/user_guide/timeseries.html
Dates should be 'Timestamp' objects and intervals should be 'Timedelta' objects.
Note: Pandas does not know how to handle partial years and months. Convert the interval to days, e.g.:
# pandas can handle partial days
>>> pd.to_timedelta([0.25, 0.5, 1, 1.5, 2], unit='d')
TimedeltaIndex(['0 days 06:00:00', '0 days 12:00:00', '1 days 00:00:00',
'1 days 12:00:00', '2 days 00:00:00'],
dtype='timedelta64[ns]', freq=None)
# pandas cannot handle partial months
>>> pd.to_timedelta([0.25, 0.5, 1, 1.5, 2], unit='M')
TimedeltaIndex([ '0 days 00:00:00', '0 days 00:00:00', '30 days 10:29:06',
'30 days 10:29:06', '60 days 20:58:12'],
dtype='timedelta64[ns]', freq=None)
# pandas cannot handle partial years
>>> pd.to_timedelta([0.25, 0.5, 1, 1.5, 2], unit='Y')
TimedeltaIndex([ '0 days 00:00:00', '0 days 00:00:00', '365 days 05:49:12',
'365 days 05:49:12', '730 days 11:38:24'],
dtype='timedelta64[ns]', freq=None)
The way to handle this is to multiply by the average number of days in a month or year. For example:
partial_interval = pd.Series([0.25, 0.5, 1, 1.5, 2])
# we want a timedelta for 0.25, 0.5, 1, 1.5 etc. months, so we convert to days
interval = pd.to_timedelta(partial_interval * 30.44, unit='d')
print(interval)
0    7 days 14:38:24
1   15 days 05:16:48
2   30 days 10:33:36
3   45 days 15:50:24
4   60 days 21:07:12
dtype: timedelta64[ns]
# we want a timedelta for 0.25, 0.5, 1, 1.5 etc. years, so we convert to days
interval = pd.to_timedelta(partial_interval * 365.25, unit='d')
print(interval)
0     91 days 07:30:00
1    182 days 15:00:00
2    365 days 06:00:00
3    547 days 21:00:00
4    730 days 12:00:00
dtype: timedelta64[ns]
current_date = self.sim.date
# sample a list of numbers from an exponential distribution
# (remember to use self.rng in TLO code)
random_draw = np.random.exponential(scale=5, size=10)
# convert these numbers into timedeltas
# valid units are: [h]ours; [d]ays; [M]onths; [y]ears
# REMEMBER: Pandas cannot handle fractions of months or years,
# so convert the fractional years to days first
random_years = pd.to_timedelta(random_draw * 365.25, unit='d')
# add to current date
future_dates = current_date + random_years
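If you need whole calendar months or years (rather than fixed-length approximations), pd.DateOffset does exact calendar arithmetic; a minimal sketch:
import pandas as pd
current_date = pd.Timestamp(2010, 1, 31)
# DateOffset understands calendar months: one month after 31 January is 28 February
print(current_date + pd.DateOffset(months=1))  # 2010-02-28 00:00:00
print(current_date + pd.DateOffset(years=2))   # 2012-01-31 00:00:00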
Events
An event scheduled to run every day on a given person. Note the order of the mixin and superclass:
class MyRegularEventOnIndividual(IndividualScopeEventMixin, RegularEvent):
    def __init__(self, module, person):
        super().__init__(module=module, person=person, frequency=DateOffset(days=1))

    def apply(self, person):
        print('do something on person', person.index, 'on', self.sim.date)
Add to the simulation, e.g. in initialise_simulation():
sim.schedule_event(MyRegularEventOnIndividual(module=self, person=an_individual),
                   sim.date + DateOffset(days=1))
class ExampleEvent(RegularEvent, PopulationScopeEventMixin):
    def __init__(self, module):
        super().__init__(module, frequency=DateOffset(days=1))

    def apply(self, population):
        # this event doesn't run after 2030
        if self.sim.date.year == 2030:
            # the end date is today's date
            self.end_date = self.sim.date
            # exit the procedure
            return
        # code that does something for this event
        print('do some work for this event')
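As with the individual-scope event, the population-scope event must be scheduled, e.g. in initialise_simulation() (a sketch following the same pattern as above):
sim.schedule_event(ExampleEvent(module=self), sim.date + DateOffset(days=1))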
Understanding assignment by index or row offset
When you assign a series/column of values from one dataframe/series to another, Pandas will by default honour the index on the collection. However, you can ignore the index by accessing the values directly. If you notice odd assignments in your properties, check whether you're assigning using the index or the values. Example (run in a Python console):
import pandas as pd
# create a dataframe with one column
df1 = pd.DataFrame({'column_1': range(0, 5)})
df1.index.name = 'df1_index'
print(df1)
# df1:
# column_1
# df1_index
# 0 0
# 1 1
# 2 2
# 3 3
# 4 4
df2 = pd.DataFrame({'column_2': range(10, 15)})
df2.index.name = 'df2_index'
df2 = df2.sort_values(by='column_2', ascending=False) # reverse the order of rows in df2
print(df2)
# notice the df2_index:
#
# column_2
# df2_index
# 4 14
# 3 13
# 2 12
# 1 11
# 0 10
# if we assign one column to another, Pandas will use the index to merge the columns
df1['df2_col2_use_index'] = df2['column_2']
# if we assign the column's values to another, Pandas will ignore the index
df1['df2_col2_use_row_offset'] = df2['column_2'].values
# note difference when assigning using index vs '.values'
print(df1)
# column_1 df2_col2_use_index df2_col2_use_row_offset
# df1_index
# 0 0 10 14
# 1 1 11 13
# 2 2 12 12
# 3 3 13 11
# 4 4 14 10
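Index alignment is also what makes assignment to a subset of rows work: a series computed on some rows is written back only to those rows. A small illustration using df1 from above (the 'doubled' column is made up):
# compute a value for a subset of df1's rows and assign it back;
# .loc assignment aligns on the index, so only those rows are updated
subset = df1.index[df1['column_1'] > 2]
df1.loc[subset, 'doubled'] = df1.loc[subset, 'column_1'] * 2
print(df1[['column_1', 'doubled']])  # rows 0-2 get NaN; rows 3 and 4 get 6 and 8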
Assign values to population with specified probability
Assign True to all individuals with probability p_true (otherwise False):
df = population.props
random_draw = self.rng.random_sample(size=len(df))  # random number between 0 and 1 for each person
df['my_property'] = (random_draw < p_true)
or randomly sample a set of rows at the given probability:
df = population.props
df['my_property'] = False
# sample without replacement so each row is selected at most once (and use self.rng in TLO code)
sampled_indices = self.rng.choice(df.index.values, int(len(df) * p_true), replace=False)
df.loc[sampled_indices, 'my_property'] = True
You can sample a proportion of the index and set those:
df = population.props
df['my_property'] = False
df.loc[df.index.to_series().sample(frac=p_true).index, 'my_property'] = True
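In TLO code, pass the module's random number generator to sample so results are reproducible (assuming self.rng is a numpy RandomState, as in the other examples on this page):
chosen = df.index.to_series().sample(frac=p_true, random_state=self.rng).index
df.loc[chosen, 'my_property'] = True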
Imagine we have a different rate of my_property being true based on sex.
df = population.props
# create a dataframe to hold the probabilities (or read from an Excel workbook)
prob_by_sex = pd.DataFrame(data=[('M', 0.46), ('F', 0.62)], columns=['sex', 'p_true'])
# merge with the population dataframe
df_with_prob = df[['sex']].merge(prob_by_sex, left_on=['sex'], right_on=['sex'], how='left')
# randomly sample numbers between 0 and 1
random_draw = self.rng.random_sample(size=len(df))
# assign true or false based on the draw and the individual's p_true
df['my_property'] = (random_draw < df_with_prob.p_true.values)
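When the probability depends on a single column, Series.map is a lighter-weight alternative to the merge; a sketch using the same numbers as prob_by_sex above:
# map each individual's sex to the corresponding probability
p_true_per_person = df['sex'].map({'M': 0.46, 'F': 0.62})
random_draw = self.rng.random_sample(size=len(df))
df['my_property'] = (random_draw < p_true_per_person.values)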
df = population.props
# get the categories and probabilities (read from Excel file/in the code etc)
categories = [1, 2, 3, 4] # or categories = ['A', 'B', 'C', 'D']
probabilities = [0.1, 0.2, 0.3, 0.4]
random_choice = self.rng.choice(categories, size=len(df), p=probabilities)
# if 'categories' should be treated as a plain old number or string
df['my_category'] = random_choice
# else if 'categories' should be treated as a real Pandas Categorical
# i.e. property was set up using Types.CATEGORICAL
df['my_category'].values[:] = random_choice
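A quick, illustrative check that the sampled categories roughly match the requested probabilities (for a large population):
# proportions should be close to [0.1, 0.2, 0.3, 0.4]
print(df['my_category'].value_counts(normalize=True).sort_index())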
Transitioning between multiple states based on probability for each transition
A utility function (transition_states) can carry out all transitions based on a probability matrix holding the probability of each transition from one state to another.
# import the util module to be able to use the transition_states function
from tlo import util
# create a probability matrix, each original state's probabilities should sum to 1
disease_states = ['a', 'b', 'c', 'd'] # or disease_states = [1, 2, 3, 4]
prob_matrix = pd.DataFrame(columns=disease_states, index=disease_states)
# when writing, prob_matrix['a'] holds the transition probabilities out of original state 'a'
# values in the list are the probabilities of each new state, in the same order
# a b c d
prob_matrix['a'] = [0.9, 0.1, 0.0, 0.0]
prob_matrix['b'] = [0.2, 0.2, 0.6, 0.0]
prob_matrix['c'] = [0.0, 0.2, 0.6, 0.2]
prob_matrix['d'] = [0.0, 0.0, 0.3, 0.7]
# when viewed, columns are the original states, rows/indexes are the new states
prob_matrix
   |  a  |  b  |  c  |  d  |
---|-----|-----|-----|-----|
 a | 0.9 | 0.2 | 0.0 | 0.0 |
 b | 0.1 | 0.2 | 0.2 | 0.0 |
 c | 0.0 | 0.6 | 0.6 | 0.3 |
 d | 0.0 | 0.0 | 0.2 | 0.7 |
df = population.props
# States can only change if the individual is alive and is over 16
changeable_states = df.loc[df.is_alive & (df.age_years > 16), 'disease_state']
# transition the changeable states based on the probability matrix, passing in the rng
new_states = util.transition_states(changeable_states, prob_matrix, self.rng)
# update the DataFrame with the new states
df.disease_state.update(new_states)
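Before transitioning, it can be worth asserting the matrix is well formed; a minimal check:
import numpy as np
# each column (original state) must be a probability distribution over the new states
assert np.allclose(prob_matrix.sum(), 1.0), 'each column must sum to 1'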
Creating and extending dataframes
In most cases, dataframes are created from the resource files. This section is for the times when you need to build dataframes from scratch, add several columns, or merge dataframes together.
The most efficient way to build a pandas dataframe is to pass the entire dataset in one step. If you have to calculate one column at a time (in a for loop, for example), avoid using pd.concat([original_df, new_column], axis=1). Instead, use the method below: calculate each of the columns sequentially and transform them into a dataframe after the entire dataset is built.
# create empty dictionary of {'column_name': column_data}, then fill it with all data
df_data = {}
for col in ['A', 'B', 'C']:
    column_name = f'column_{col}'
    column_data = function_that_returns_list_of_data(col)
    df_data[column_name] = column_data
# convert dictionary into pandas dataframe
pd.DataFrame(data=df_data, index=index)  # the index here can be left out if not relevant
If you want to join two dataframes together and you know that there are no repeated values in the join-column, then it is more efficient to join on indexes rather than named columns.
# original_df already has the index that we want so we don't need to reindex
original_df
# we want to join on 'joining_column' so we make this the index of merging_df
merging_df.set_index('joining_column', inplace=True)
# now we can join the indexes of both dataframes
altered_df = original_df.merge(merging_df, left_index=True, right_index=True, how='left')
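A self-contained toy version of this join (the column names are made up for illustration):
import pandas as pd
original_df = pd.DataFrame({'district': ['A', 'B', 'A']}, index=[0, 1, 2])
merging_df = pd.DataFrame({'joining_column': [0, 1, 2], 'weight': [55.0, 60.5, 71.2]})
merging_df.set_index('joining_column', inplace=True)
# left join on the indexes of both dataframes
altered_df = original_df.merge(merging_df, left_index=True, right_index=True, how='left')
print(altered_df)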
Plotting maps
An example analysis script which plots maps is in src/scripts/consumable_resource_analyses/map_plotting_example.py.
Maps are plotted using shapefiles (.shp), files that represent geographic areas. There are three different shapefiles that correspond to the boundaries of different administrative levels (these files come from the Humanitarian Data Exchange project: https://data.humdata.org/dataset/malawi-administrative-level-0-3-boundaries).
- mwi_admbnda_adm0_nso_20181016.shp -> country boundary
- mwi_admbnda_adm1_nso_20181016.shp -> region boundaries
- mwi_admbnda_adm2_nso_20181016.shp -> district boundaries
To read in the shapefiles you'll need the pyshp package (https://pypi.org/project/pyshp/). To install it, click on the Terminal button in PyCharm and enter the following command:
pip install pyshp
Import the python package for reading in the shape files into your code:
import shapefile as shp
Read in the shape file corresponding to the administrative boundaries you want to plot:
sf = shp.Reader('resources/ResourceFile_mwi_admbnda_adm0_nso_20181016.shp')
You'll need the matplotlib package installed to plot the shapefile. To install it, click on the Terminal button in PyCharm and enter the following command:
pip install matplotlib
Import matplotlib; by convention matplotlib.pyplot is given the alias plt:
import matplotlib.pyplot as plt
Open a new figure and plot the shape file with the following code:
plt.figure()  # open a new figure
# loop through each part of the file (regions, districts etc.) and plot the boundary
for shape in sf.shapeRecords():
    for i in range(len(shape.shape.parts)):
        i_start = shape.shape.parts[i]
        if i == len(shape.shape.parts) - 1:
            i_end = len(shape.shape.points)
        else:
            i_end = shape.shape.parts[i + 1]
        x = [point[0] for point in shape.shape.points[i_start:i_end]]
        y = [point[1] for point in shape.shape.points[i_start:i_end]]
        plt.plot(x, y, color='k', linewidth=0.1)  # color='k' sets the boundary line colour to black
plt.axis('off')  # remove the axes
plt.gca().set_aspect('equal', adjustable='box')  # set an equal aspect ratio to avoid stretching the map
To plot points on the map with your data, you will need data with grid coordinates. For example, if the data is stored in a dataframe named example_df with columns named Eastings and Northings, corresponding to the x and y coordinates, then we would plot the grid coordinates using the following code:
eastings = example_df['Eastings'] # x grid coordinate
northings = example_df['Northings'] # y grid coordinate
plt.scatter(eastings, northings, c='b', s=4) # c sets the marker colour and s sets the marker size
For more options on marker colour, marker size and marker style see the following matplotlib documentation: https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.scatter.html
If your data has both grid coordinates and a numerical value, the numerical value can be represented on the plot using a colour map. For example, if the data is a dataframe of paracetamol stock-out days named paracetamol_df, with columns named Eastings, Northings, and Stock Out Days, we would plot the numerical values with a colour map using the following code:
stock_out_days = paracetamol_df['Stock Out Days']
eastings = paracetamol_df['Eastings']
northings = paracetamol_df['Northings']
cm = plt.cm.get_cmap('Purples')  # 'Purples' is one of matplotlib's sequential colour maps
sc = plt.scatter(eastings, northings, c=stock_out_days, cmap=cm, s=4)  # cmap sets the colour map
# fraction and pad set the size and position of the colour bar
plt.colorbar(sc, fraction=0.01, pad=0.01, label="Stock Out Days")
For more colour map options see the following matplotlib documentation: https://matplotlib.org/examples/color/colormaps_reference.html
You can save your figure and output it to PyCharm's Plots window in the SciView tab using the following code:
# bbox_inches="tight" removes white space and dpi sets the resolution of the figure
plt.savefig('my_map_plot.png', bbox_inches="tight", dpi=600)
plt.show() # this outputs the figure to PyCharm's Plots window
import pandas as pd
import matplotlib.pyplot as plt
import shapefile as shp
# read in the data frame and shape file
paracetamol_df = pd.read_csv('paracetamol_df.csv')
sf = shp.Reader("mwi_admbnda_adm2_nso_20181016.shp")
# select the data and grid coordinates from the data frame
stock_out_days = paracetamol_df['Stock Out Days']
eastings = paracetamol_df['Eastings']
northings = paracetamol_df['Northings']
# create a figure
plt.figure()
# loop through the parts in the shape file
for shape in sf.shapeRecords():
    for i in range(len(shape.shape.parts)):
        i_start = shape.shape.parts[i]
        if i == len(shape.shape.parts) - 1:
            i_end = len(shape.shape.points)
        else:
            i_end = shape.shape.parts[i + 1]
        x = [point[0] for point in shape.shape.points[i_start:i_end]]
        y = [point[1] for point in shape.shape.points[i_start:i_end]]
        plt.plot(x, y, color='k', linewidth=0.1)
# remove figure axes and set the aspect ratio to equal so that the map isn't stretched
plt.axis('off')
plt.gca().set_aspect('equal')
# plot the data using a purple colour map
cm = plt.cm.get_cmap('Purples')
sc = plt.scatter(eastings, northings, c=stock_out_days, cmap=cm, s=4)
plt.colorbar(sc, fraction=0.01, pad=0.01, label="Stock Out Days")
# give the figure a title
plt.title("Paracetamol 500mg, tablets")
# save the figure
plt.savefig('plot_map_paracetamol_stock_out_days.png', bbox_inches="tight", dpi=600)
# display the figure in PyCharm's Plots window
plt.show()
Logging
This structure should be used for the analysis or test script. The sim.configure_logging method does all configuration of the logger and takes a logfile prefix via its filename argument. By default this outputs a logfile in TLOmodel/outputs in the form "<filename>__<YYYY-MM-DD_HH:MM:SS>.log". The logfile path is returned so the logfile can be read later on.
# Establish the simulation object
start_date = Date(year=2010, month=1, day=1)
end_date = Date(year=2010, month=12, day=31)
popsize = 2000
sim = Simulation(start_date=start_date)
# Register the appropriate modules
sim.register(demography.Demography(resourcefilepath=resourcefilepath))
sim.register(contraception.Contraception(resourcefilepath=resourcefilepath))
# configure logging after registering modules
logfile = sim.configure_logging(filename="LogFile")
# Run the simulation
sim.seed_rngs(0)
sim.make_initial_population(n=popsize)
sim.simulate(end_date=end_date)
# No filehandler flushing is required
# ...
When using logging within a disease module, you just need to make sure you're importing logging from tlo:
from tlo import logging
# usage in the rest of the script is the same
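Because usage matches the standard library, getting and using a logger inside a module looks like the sketch below (the logger name follows stdlib convention; the message is just an example):
from tlo import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

logger.info('some information about the state of the simulation')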
The default logging directory is ./outputs/; this can be set using the directory argument:
custom_dir = "./path/to/custom/dir/"
# configure logging after registering modules
logfile = sim.configure_logging(filename="LogFile", directory=custom_dir)
In a test you would do this using the tmpdir fixture like so:
def test_example(tmpdir):
    # Establish the simulation object
    sim = Simulation(start_date=start_date)
    # ...
    # Register modules
    # ...
    logfile = sim.configure_logging(filename="log", directory=tmpdir)
    # Run the simulation
    # ...
    # read the results
    output = parse_log_file(logfile)
You can pass a dictionary of logging levels to the configure_logging() method, with '*' being a wildcard for all disease modules. Because the dictionary is evaluated in order, you can disable all disease modules and then enable just the ones you are interested in.
from tlo import logging
# ...
# Establish the simulation object
# ...
# Register modules
sim.register(demography.Demography(resourcefilepath=resourcefilepath))
sim.register(contraception.Contraception(resourcefilepath=resourcefilepath))
sim.register(enhanced_lifestyle.Lifestyle(resourcefilepath=resourcefilepath))
sim.register(healthsystem.HealthSystem(resourcefilepath=resourcefilepath,
                                       service_availability=service_availability,
                                       capabilities_coefficient=1.0,
                                       mode_appt_constraints=2))
sim.register(symptommanager.SymptomManager(resourcefilepath=resourcefilepath))
sim.register(healthseekingbehaviour.HealthSeekingBehaviour())
sim.register(dx_algorithm_child.DxAlgorithmChild())
sim.register(mockitis.Mockitis())
sim.register(chronicsyndrome.ChronicSyndrome())
# Here, we are only interested in logging the output from mockitis and the symptom manager:
custom_levels = {
    '*': logging.CRITICAL,                    # disable logging for all modules
    'tlo.methods.mockitis': logging.INFO,     # enable logging at INFO level
    'tlo.methods.symptommanager': logging.INFO,  # enable logging at INFO level
}
# use the custom levels in the configuration of the logging
logfile = sim.configure_logging(filename="LogFile", custom_levels=custom_levels)
# Run the simulation
# ...
# read the results
output = parse_log_file(logfile)
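The ordering rule also works the other way round; for example, to log everything at INFO except one module (a sketch reusing the mockitis module name from above):
custom_levels = {
    '*': logging.INFO,                        # enable INFO for all modules
    'tlo.methods.mockitis': logging.CRITICAL,  # then silence this one module
}
logfile = sim.configure_logging(filename="LogFile", custom_levels=custom_levels)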
TODO: Insert code to show how logging can be the cumulative since the last logging event
TODO: e.g debug to screen, info to file; nothing to screen, info to file