Merge pull request #321 from py-why/speedup_info
add new methods to the notebooks and readme
AlxdrPolyakov authored Sep 12, 2024

2 parents 98d92ca + f533461 commit d2c65f8
Showing 7 changed files with 1,662 additions and 486 deletions.
4 changes: 4 additions & 0 deletions README.md
@@ -190,6 +190,10 @@ print(f"Best estimator: {ct.best_estimator}")
```

Now, if ***outcome_model="auto"*** is set in the CausalTune constructor, we search over a simultaneous search space for the EconML estimators and for FLAML wrappers for common regressors. The old behavior is achieved by ***outcome_model="nested"*** (refitting AutoML for each estimator).

You can also preprocess the data in the CausalityDataset using one of the popular category encoders: ***OneHot, WoE, Label, Target***.
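As a minimal sketch of both options (budget values are illustrative; the `fit` call follows the quick-start example above):

```python
from causaltune import CausalTune

# cd is a CausalityDataset built as in the quick-start example above
ct = CausalTune(
    estimator_list=["CausalForestDML", "XLearner"],
    time_budget=60,                # illustrative overall budget, in seconds
    components_time_budget=10,     # illustrative per-component budget
    outcome_model="auto",          # joint search over estimators and outcome models
    # outcome_model="nested",      # old behavior: refit AutoML for each estimator
)
ct.fit(data=cd, outcome=cd.outcomes[0])
print(f"Best estimator: {ct.best_estimator}")
```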

## Supported Models
The package supports the following causal estimators:
* Meta Learners:
29 changes: 29 additions & 0 deletions causaltune/dataset_processor.py
@@ -8,7 +8,18 @@


class CausalityDatasetProcessor(BaseEstimator, TransformerMixin):
"""
A processor for CausalityDataset, designed to preprocess data for causal inference tasks by encoding, normalizing,
and handling missing values.
Attributes:
encoder_type (str): Type of encoder used for categorical feature encoding ('onehot', 'label', 'target', 'woe').
outcome (str): The target variable used for encoding.
encoder: Encoder object used during feature transformations.
"""
def __init__(self):
"""
Initializes CausalityDatasetProcessor with default attributes for encoder_type, outcome, and encoder.
"""
self.encoder_type = None
self.outcome = None
self.encoder = None
@@ -19,13 +30,31 @@ def fit(
encoder_type: Optional[str] = "onehot",
outcome: Optional[str] = None,
):
"""
Fits the processor by preprocessing the input CausalityDataset.
Args:
cd (CausalityDataset): The dataset for causal analysis.
encoder_type (str, optional): Encoder to use for categorical features. Default is 'onehot'.
outcome (str, optional): The target variable for encoding (needed for 'target' or 'woe'). Default is None.
Returns:
CausalityDatasetProcessor: The fitted processor instance.
"""
cd = copy.deepcopy(cd)
self.preprocess_dataset(
cd, encoder_type=encoder_type, outcome=outcome, fit_phase=True
)
return self

def transform(self, cd: CausalityDataset):
"""
Transforms the CausalityDataset using the fitted encoder.
Args:
cd (CausalityDataset): Dataset to transform.
Returns:
CausalityDataset: Transformed dataset.
Raises:
ValueError: If the processor has not been fitted yet.
"""
if self.encoder:
cd = self.preprocess_dataset(
cd,
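The rendered hunk above is truncated; based on the `fit`/`transform` signatures it shows, a usage sketch follows (import paths and column names are assumptions):

```python
import pandas as pd

from causaltune.data_utils import CausalityDataset  # assumed import path
from causaltune.dataset_processor import CausalityDatasetProcessor

# Toy frame; column names are hypothetical
df = pd.DataFrame({
    "T": [0, 1, 0, 1],
    "Y": [1.0, 2.0, 0.5, 2.5],
    "city": ["a", "b", "a", "c"],  # categorical feature to encode
})
cd = CausalityDataset(data=df, treatment="T", outcomes=["Y"])

processor = CausalityDatasetProcessor()
# Per the docstring, 'target' and 'woe' encoders require the outcome column
processor.fit(cd, encoder_type="target", outcome="Y")
cd_encoded = processor.transform(cd)  # raises ValueError if fit was never called
```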
18 changes: 3 additions & 15 deletions notebooks/AB testing.ipynb
@@ -1,7 +1,6 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -49,7 +48,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -78,7 +76,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -97,23 +94,20 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Data Generating Process"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We first create synthetic data from a DGP with perfect randomisation of the treatment as we are replicating an AB test environment"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -154,7 +148,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -179,7 +172,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Now if outcome_model=\"auto\" in the CausalTune constructor, we search over a simultaneous search space for the EconML estimators and for FLAML wrappers for common regressors. The old behavior is now achieved by outcome_model=\"nested\" (the default for now)"
"Now if `outcome_model=\"auto\"` in the CausalTune constructor, we search over a simultaneous search space for the EconML estimators and for FLAML wrappers for common regressors. The old behavior is now achieved by `outcome_model=\"nested\"` (Refitting AutoML for each estimator).\n",
"\n",
"You can also preprocess the data in the CausalityDataset using one of the popular category encoders: OneHot, WoE, Label, Target."
]
},
{
@@ -201,7 +196,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -230,7 +224,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -274,7 +267,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -356,21 +348,18 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": []
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2. Segmentation with Wise Pizza"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -431,7 +420,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": []
1,366 changes: 1,286 additions & 80 deletions notebooks/CausalityDataset setup.ipynb

Large diffs are not rendered by default.

169 changes: 77 additions & 92 deletions notebooks/ERUPT under simulated random assignment.ipynb
@@ -1,7 +1,6 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "a34f30c6",
"metadata": {
@@ -15,18 +14,10 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 12,
"id": "c37a7a94",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"OMP: Info #276: omp_set_nested routine deprecated, please use omp_set_max_active_levels instead.\n"
]
}
],
"outputs": [],
"source": [
"%load_ext autoreload\n",
"%autoreload 2\n",
@@ -91,31 +82,14 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 13,
"id": "5ed9b5f7",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"data": {
"application/javascript": [
"\n",
"// turn off scrollable windows for large output\n",
"IPython.OutputArea.prototype._should_scroll = function(lines) {\n",
" return false;\n",
"}\n"
],
"text/plain": [
"<IPython.core.display.Javascript object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"outputs": [],
"source": [
"%%javascript\n",
"\n",
@@ -126,7 +100,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "af5333b0",
"metadata": {},
@@ -188,81 +161,81 @@
" <tr>\n",
" <th>0</th>\n",
" <td>0</td>\n",
" <td>1.239308</td>\n",
" <td>1.0</td>\n",
" <td>-0.847134</td>\n",
" <td>-0.398563</td>\n",
" <td>0.176539</td>\n",
" <td>0.957360</td>\n",
" <td>1.122457</td>\n",
" <td>0.328241</td>\n",
" <td>-0.529094</td>\n",
" <td>0.0</td>\n",
" <td>-0.325404</td>\n",
" <td>-3.200259</td>\n",
" <td>-1.096231</td>\n",
" <td>0.454945</td>\n",
" <td>-0.682950</td>\n",
" <td>0.096673</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>0</td>\n",
" <td>0.108442</td>\n",
" <td>0.0</td>\n",
" <td>-0.583898</td>\n",
" <td>-0.899265</td>\n",
" <td>1.177333</td>\n",
" <td>-0.563962</td>\n",
" <td>-0.614737</td>\n",
" <td>0.195308</td>\n",
" <td>-2.673912</td>\n",
" <td>1.0</td>\n",
" <td>-2.224641</td>\n",
" <td>1.384133</td>\n",
" <td>0.506485</td>\n",
" <td>0.145684</td>\n",
" <td>-0.195266</td>\n",
" <td>0.472952</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>1</td>\n",
" <td>-0.897310</td>\n",
" <td>-1.666444</td>\n",
" <td>0.0</td>\n",
" <td>-2.237590</td>\n",
" <td>0.061438</td>\n",
" <td>-0.462519</td>\n",
" <td>0.777278</td>\n",
" <td>-1.379022</td>\n",
" <td>0.345805</td>\n",
" <td>0.687121</td>\n",
" <td>-0.207614</td>\n",
" <td>0.788699</td>\n",
" <td>1.131345</td>\n",
" <td>-0.352091</td>\n",
" <td>0.550413</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>1</td>\n",
" <td>0.757475</td>\n",
" <td>1.0</td>\n",
" <td>-0.047319</td>\n",
" <td>0.354603</td>\n",
" <td>-1.976429</td>\n",
" <td>0.081945</td>\n",
" <td>0.424041</td>\n",
" <td>0.695707</td>\n",
" <td>0</td>\n",
" <td>-1.619143</td>\n",
" <td>0.0</td>\n",
" <td>0.740413</td>\n",
" <td>-0.666263</td>\n",
" <td>1.027818</td>\n",
" <td>-0.197965</td>\n",
" <td>-2.025220</td>\n",
" <td>0.423549</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>0</td>\n",
" <td>0.853478</td>\n",
" <td>0.331106</td>\n",
" <td>1.0</td>\n",
" <td>-0.256832</td>\n",
" <td>0.048748</td>\n",
" <td>1.536085</td>\n",
" <td>-1.027415</td>\n",
" <td>0.689733</td>\n",
" <td>0.304767</td>\n",
" <td>-0.907719</td>\n",
" <td>-1.775581</td>\n",
" <td>0.072270</td>\n",
" <td>-1.760379</td>\n",
" <td>1.449668</td>\n",
" <td>0.083704</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" T Y random X1 X2 X3 X4 X5 \\\n",
"0 0 1.239308 1.0 -0.847134 -0.398563 0.176539 0.957360 1.122457 \n",
"1 0 0.108442 0.0 -0.583898 -0.899265 1.177333 -0.563962 -0.614737 \n",
"2 1 -0.897310 0.0 -2.237590 0.061438 -0.462519 0.777278 -1.379022 \n",
"3 1 0.757475 1.0 -0.047319 0.354603 -1.976429 0.081945 0.424041 \n",
"4 0 0.853478 1.0 -0.256832 0.048748 1.536085 -1.027415 0.689733 \n",
"0 0 -0.529094 0.0 -0.325404 -3.200259 -1.096231 0.454945 -0.682950 \n",
"1 0 -2.673912 1.0 -2.224641 1.384133 0.506485 0.145684 -0.195266 \n",
"2 1 -1.666444 0.0 0.687121 -0.207614 0.788699 1.131345 -0.352091 \n",
"3 0 -1.619143 0.0 0.740413 -0.666263 1.027818 -0.197965 -2.025220 \n",
"4 0 0.331106 1.0 -0.907719 -1.775581 0.072270 -1.760379 1.449668 \n",
"\n",
" propensity \n",
"0 0.328241 \n",
"1 0.195308 \n",
"2 0.345805 \n",
"3 0.695707 \n",
"4 0.304767 "
"0 0.096673 \n",
"1 0.472952 \n",
"2 0.550413 \n",
"3 0.423549 \n",
"4 0.083704 "
]
},
"metadata": {},
@@ -299,15 +272,30 @@
"id": "33681e65-6dd4-4c7d-a62d-925572b39e81",
"metadata": {},
"source": [
"Now if outcome_model=\"auto\" in the CausalTune constructor, we search over a simultaneous search space for the EconML estimators and for FLAML wrappers for common regressors. The old behavior is now achieved by outcome_model=\"nested\" (the default for now)"
"Now if `outcome_model=\"auto\"` in the CausalTune constructor, we search over a simultaneous search space for the EconML estimators and for FLAML wrappers for common regressors. The old behavior is now achieved by `outcome_model=\"nested\"` (Refitting AutoML for each estimator).\n",
"\n",
"You can also preprocess the data in the CausalityDataset using one of the popular category encoders: OneHot, WoE, Label, Target."
]
},
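For context, a brief sketch of how the fields printed in the output below might be read back after fitting; `best_estimator` is confirmed by the README, while `best_config` and `best_score` are assumed attribute names mirroring the printed labels:

```python
# After ct.fit(...): inspect the search result.
# best_estimator appears in the README; best_config / best_score are
# assumed names mirroring the printed labels in the output below.
print(f"Best estimator: {ct.best_estimator}")
print(f"Best config: {ct.best_config}")
print(f"Best score: {ct.best_score}")
```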
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 7,
"id": "a51c87f4",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Fitting a Propensity-Weighted scoring estimator to be used in scoring tasks\n",
"Propensity Model Fitted Successfully\n",
"---------------------\n",
"Best estimator: backdoor.econml.dml.CausalForestDML\n",
"Best config: {'estimator': {'estimator_name': 'backdoor.econml.dml.CausalForestDML', 'drate': 1, 'n_estimators': 2, 'criterion': 'het', 'min_samples_split': 12, 'min_samples_leaf': 8, 'min_weight_fraction_leaf': 0.0, 'max_features': 'log2', 'min_impurity_decrease': 0, 'max_samples': 0.2884902061383809, 'min_balancedness_tol': 0.4585520111743354, 'honest': 1, 'fit_intercept': 1, 'subforest_size': 5}, 'outcome_estimator': {'alpha': 0.006205274971406812, 'fit_intercept': True, 'eps': 7.833744321548246e-15, 'estimator_name': 'lasso_lars'}}\n",
"Best score: 0.2952285030581425\n"
]
}
],
"source": [
"ct = CausalTune(\n",
" estimator_list=[\"CausalForestDML\", \"XLearner\"],\n",
@@ -334,7 +322,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "19bcfc2e",
"metadata": {},
@@ -343,7 +330,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "2bea4e38",
"metadata": {},
@@ -438,22 +424,22 @@
" <tbody>\n",
" <tr>\n",
" <th>naive_ate</th>\n",
" <td>0.218151</td>\n",
" <td>0.124848</td>\n",
" <td>0.030740</td>\n",
" <td>0.139801</td>\n",
" </tr>\n",
" <tr>\n",
" <th>random_erupt</th>\n",
" <td>0.023141</td>\n",
" <td>0.216845</td>\n",
" <td>-0.001059</td>\n",
" <td>0.210618</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" estimated_effect sd\n",
"naive_ate 0.218151 0.124848\n",
"random_erupt 0.023141 0.216845"
"naive_ate 0.030740 0.139801\n",
"random_erupt -0.001059 0.210618"
]
},
"metadata": {},
@@ -467,7 +453,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "a54530bf",
"metadata": {},
419 changes: 194 additions & 225 deletions notebooks/Multiple treatments examples.ipynb

Large diffs are not rendered by default.

143 changes: 69 additions & 74 deletions notebooks/Standard errors.ipynb
@@ -1,7 +1,6 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "a34f30c6",
"metadata": {
@@ -17,7 +16,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 12,
"id": "43b770ca",
"metadata": {},
"outputs": [],
@@ -81,25 +80,14 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 13,
"id": "5ed9b5f7",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"data": {
"application/javascript": "\n// turn off scrollable windows for large output\nIPython.OutputArea.prototype._should_scroll = function(lines) {\n return false;\n}\n",
"text/plain": [
"<IPython.core.display.Javascript object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"outputs": [],
"source": [
"%%javascript\n",
"\n",
@@ -125,7 +113,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "ab536d1b",
"metadata": {},
@@ -252,7 +239,7 @@
" <th>2</th>\n",
" <td>0</td>\n",
" <td>2.996273</td>\n",
" <td>1.0</td>\n",
" <td>0.0</td>\n",
" <td>-0.807451</td>\n",
" <td>-0.202946</td>\n",
" <td>-0.360898</td>\n",
@@ -276,7 +263,7 @@
" <th>3</th>\n",
" <td>0</td>\n",
" <td>1.366206</td>\n",
" <td>0.0</td>\n",
" <td>1.0</td>\n",
" <td>0.390083</td>\n",
" <td>0.596582</td>\n",
" <td>-1.850350</td>\n",
@@ -300,7 +287,7 @@
" <th>4</th>\n",
" <td>0</td>\n",
" <td>1.963538</td>\n",
" <td>1.0</td>\n",
" <td>0.0</td>\n",
" <td>-1.045228</td>\n",
" <td>-0.602710</td>\n",
" <td>0.011465</td>\n",
@@ -329,9 +316,9 @@
" treatment y_factual random x1 x2 x3 x4 \\\n",
"0 1 5.599916 1.0 -0.528603 -0.343455 1.128554 0.161703 \n",
"1 0 6.875856 1.0 -1.736945 -1.802002 0.383828 2.244319 \n",
"2 0 2.996273 1.0 -0.807451 -0.202946 -0.360898 -0.879606 \n",
"3 0 1.366206 0.0 0.390083 0.596582 -1.850350 -0.879606 \n",
"4 0 1.963538 1.0 -1.045228 -0.602710 0.011465 0.161703 \n",
"2 0 2.996273 0.0 -0.807451 -0.202946 -0.360898 -0.879606 \n",
"3 0 1.366206 1.0 0.390083 0.596582 -1.850350 -0.879606 \n",
"4 0 1.963538 0.0 -1.045228 -0.602710 0.011465 0.161703 \n",
"\n",
" x5 x6 x7 ... x16 x17 x18 x19 x20 x21 x22 x23 x24 \\\n",
"0 -0.316603 1.295216 1.0 ... 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 \n",
@@ -360,7 +347,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "d4d1871f",
"metadata": {},
@@ -390,20 +376,22 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "e0f63d12",
"metadata": {},
"source": [
"Note that in the example below, we are passing `'cheap_inference'` to `estimator_list`. This configuration will restrict the selection of estimators to the ones that have analytical standard errors."
"Note that in the example below, we are passing `'cheap_inference'` to `estimator_list`. This configuration will restrict the selection of estimators to the ones that have analytical standard errors.\n",
"\n",
"Now if `outcome_model=\"auto\"` in the CausalTune constructor, we search over a simultaneous search space for the EconML estimators and for FLAML wrappers for common regressors. The old behavior is now achieved by `outcome_model=\"nested\"` (Refitting AutoML for each estimator).\n",
"\n",
"You can also preprocess the data in the CausalityDataset using one of the popular category encoders: OneHot, WoE, Label, Target."
]
},
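A sketch of the restricted constructor call, based on the arguments visible in the hunk below; whether `estimator_list` takes the bare string or a one-element list is an assumption:

```python
# Restrict the search to estimators with analytical standard errors;
# the bare-string form of estimator_list is assumed here
ct = CausalTune(
    estimator_list="cheap_inference",
    components_verbose=0,
    time_budget=time_budget,                        # defined earlier in the notebook
    components_time_budget=components_time_budget,  # likewise
    train_size=train_size,
    outcome_model="auto",
)
```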
{
"cell_type": "code",
"execution_count": 8,
"id": "097c923e",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
@@ -414,11 +402,11 @@
"output_type": "stream",
"text": [
"Fitting a Propensity-Weighted scoring estimator to be used in scoring tasks\n",
"Initial configs: [{'estimator': {'estimator_name': 'backdoor.econml.dr.ForestDRLearner', 'min_propensity': 1e-06, 'n_estimators': 100, 'min_samples_split': 5, 'min_samples_leaf': 5, 'min_weight_fraction_leaf': 0.0, 'max_features': 'auto', 'min_impurity_decrease': 0.0, 'max_samples': 0.45, 'min_balancedness_tol': 0.45, 'honest': True, 'subforest_size': 4}}, {'estimator': {'estimator_name': 'backdoor.econml.dr.LinearDRLearner', 'fit_cate_intercept': True, 'min_propensity': 1e-06}}, {'estimator': {'estimator_name': 'backdoor.econml.dr.SparseLinearDRLearner', 'fit_cate_intercept': True, 'n_alphas': 100, 'n_alphas_cov': 10, 'min_propensity': 1e-06, 'tol': 0.0001, 'max_iter': 10000, 'mc_agg': 'mean'}}, {'estimator': {'estimator_name': 'backdoor.econml.dml.LinearDML', 'fit_cate_intercept': True, 'mc_agg': 'mean'}}, {'estimator': {'estimator_name': 'backdoor.econml.dml.SparseLinearDML', 'fit_cate_intercept': True, 'n_alphas': 100, 'n_alphas_cov': 10, 'tol': 0.0001, 'max_iter': 10000, 'mc_agg': 'mean'}}, {'estimator': {'estimator_name': 'backdoor.econml.dml.CausalForestDML', 'drate': True, 'n_estimators': 100, 'criterion': 'mse', 'min_samples_split': 10, 'min_samples_leaf': 5, 'min_weight_fraction_leaf': 0.0, 'max_features': 'auto', 'min_impurity_decrease': 0.0, 'max_samples': 0.45, 'min_balancedness_tol': 0.45, 'honest': True, 'fit_intercept': True, 'subforest_size': 4}}]\n",
"Propensity Model Fitted Successfully\n",
"---------------------\n",
"Best estimator: backdoor.econml.dr.ForestDRLearner\n",
"Best config: {'estimator': {'estimator_name': 'backdoor.econml.dr.ForestDRLearner', 'min_propensity': 1e-06, 'n_estimators': 100, 'min_samples_split': 5, 'min_samples_leaf': 5, 'min_weight_fraction_leaf': 0.0, 'max_features': 'auto', 'min_impurity_decrease': 0.0, 'max_samples': 0.45, 'min_balancedness_tol': 0.45, 'honest': 1, 'subforest_size': 4}}\n",
"Best score: 0.28241795991132435\n"
"Best config: {'estimator': {'estimator_name': 'backdoor.econml.dr.ForestDRLearner', 'min_propensity': 4.1309041114224745e-06, 'n_estimators': 51, 'min_samples_split': 2, 'min_samples_leaf': 5, 'min_weight_fraction_leaf': 0.0, 'max_features': 'log2', 'min_impurity_decrease': 0, 'max_samples': 0.4714678358460523, 'min_balancedness_tol': 0.48107268073765275, 'honest': 1, 'subforest_size': 5}, 'outcome_estimator': {'alpha': 0.0680343251051132, 'fit_intercept': True, 'eps': 3.581001561497127e-16, 'estimator_name': 'lasso_lars'}}\n",
"Best score: 0.19782534210362535\n"
]
}
],
@@ -430,7 +418,8 @@
" components_verbose=0,\n",
" time_budget=time_budget,\n",
" components_time_budget=components_time_budget,\n",
" train_size=train_size\n",
" train_size=train_size,\n",
" outcome_model=\"auto\"\n",
")\n",
"\n",
"\n",
@@ -455,11 +444,11 @@
{
"data": {
"text/plain": [
"array([[3.08417039],\n",
" [4.10807041],\n",
" [4.32885751],\n",
" [4.53901377],\n",
" [4.19668172]])"
"array([[3.06847504],\n",
" [5.10172326],\n",
" [2.3049086 ],\n",
" [4.39115942],\n",
" [4.38397264]])"
]
},
"metadata": {},
@@ -476,7 +465,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "8c819410",
"metadata": {},
@@ -495,11 +483,11 @@
{
"data": {
"text/plain": [
"array([[0.28758771],\n",
" [0.2267228 ],\n",
" [0.29267037],\n",
" [0.22686985],\n",
" [0.28054057]])"
"array([[0.74527346],\n",
" [0.76067972],\n",
" [0.48614067],\n",
" [0.42494167],\n",
" [0.52123297]])"
]
},
"metadata": {},
@@ -513,7 +501,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "9a474ab5",
"metadata": {},
@@ -568,48 +555,48 @@
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>3.084</td>\n",
" <td>0.288</td>\n",
" <td>10.724</td>\n",
" <td>3.068</td>\n",
" <td>0.745</td>\n",
" <td>4.117</td>\n",
" <td>0.0</td>\n",
" <td>2.611</td>\n",
" <td>3.557</td>\n",
" <td>1.843</td>\n",
" <td>4.294</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>4.108</td>\n",
" <td>0.227</td>\n",
" <td>18.119</td>\n",
" <td>5.102</td>\n",
" <td>0.761</td>\n",
" <td>6.707</td>\n",
" <td>0.0</td>\n",
" <td>3.735</td>\n",
" <td>4.481</td>\n",
" <td>3.851</td>\n",
" <td>6.353</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>4.329</td>\n",
" <td>0.293</td>\n",
" <td>14.791</td>\n",
" <td>2.305</td>\n",
" <td>0.486</td>\n",
" <td>4.741</td>\n",
" <td>0.0</td>\n",
" <td>3.847</td>\n",
" <td>4.810</td>\n",
" <td>1.505</td>\n",
" <td>3.105</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>4.539</td>\n",
" <td>0.227</td>\n",
" <td>20.007</td>\n",
" <td>4.391</td>\n",
" <td>0.425</td>\n",
" <td>10.334</td>\n",
" <td>0.0</td>\n",
" <td>4.166</td>\n",
" <td>4.912</td>\n",
" <td>3.692</td>\n",
" <td>5.090</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>4.197</td>\n",
" <td>0.281</td>\n",
" <td>14.959</td>\n",
" <td>4.384</td>\n",
" <td>0.521</td>\n",
" <td>8.411</td>\n",
" <td>0.0</td>\n",
" <td>3.735</td>\n",
" <td>4.658</td>\n",
" <td>3.527</td>\n",
" <td>5.241</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
@@ -618,11 +605,11 @@
"text/plain": [
" point_estimate stderr zstat pvalue ci_lower ci_upper\n",
"X \n",
"0 3.084 0.288 10.724 0.0 2.611 3.557\n",
"1 4.108 0.227 18.119 0.0 3.735 4.481\n",
"2 4.329 0.293 14.791 0.0 3.847 4.810\n",
"3 4.539 0.227 20.007 0.0 4.166 4.912\n",
"4 4.197 0.281 14.959 0.0 3.735 4.658"
"0 3.068 0.745 4.117 0.0 1.843 4.294\n",
"1 5.102 0.761 6.707 0.0 3.851 6.353\n",
"2 2.305 0.486 4.741 0.0 1.505 3.105\n",
"3 4.391 0.425 10.334 0.0 3.692 5.090\n",
"4 4.384 0.521 8.411 0.0 3.527 5.241"
]
},
"execution_count": 11,
@@ -633,11 +620,19 @@
"source": [
"ct.effect_inference(test_df)[0].summary_frame(alpha=0.1, value=0, decimals=3).head()"
]
},
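For reference, the inference object returned above can be queried directly; a brief sketch using the same columns shown in the summary frame:

```python
# The [0] indexing follows the notebook cell above
inf = ct.effect_inference(test_df)[0]
summary = inf.summary_frame(alpha=0.1, value=0, decimals=3)  # 90% CIs, as above
print(summary[["point_estimate", "stderr", "ci_lower", "ci_upper"]].head())
```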
{
"cell_type": "code",
"execution_count": null,
"id": "2b200c45-d652-42a8-b8f1-611a119143c3",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "causality",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -651,7 +646,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.10.14"
}
},
"nbformat": 4,
