```@meta
CurrentModule = CausalELM
```

# Overview
CausalELM provides easy-to-use implementations of modern causal inference methods. While
CausalELM implements a variety of estimators, they all have one thing in common: the use of
machine learning models to flexibly estimate causal effects. This is where the ELM in
CausalELM comes from: the machine learning model underlying all the estimators is an extreme
learning machine (ELM). An ELM is a simple, single-layer feedforward neural network that
uses randomized weights and least squares optimization, offering a good tradeoff between
learning non-linear dependencies and simplicity. Furthermore, CausalELM implements bagged
ensembles of ELMs to reduce the variance resulting from the randomized weights.
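To make the idea concrete, here is a minimal sketch of an ELM and a bagged ensemble in
plain Julia. This is illustrative only, not CausalELM's internal implementation; the
function names `elm_fit`, `elm_predict`, and `bagged_elm`, the `tanh` activation, and the
hyperparameter values are assumptions made for the example.

```julia
using LinearAlgebra, Random, Statistics

# Illustrative ELM: random, untrained hidden weights; least-squares output layer.
function elm_fit(X, y; hidden=32, rng=Xoshiro(42))
    W = randn(rng, size(X, 2), hidden)  # randomized input-to-hidden weights
    β = tanh.(X * W) \ y                # least squares fit of hidden-to-output weights
    return W, β
end

elm_predict(X, W, β) = tanh.(X * W) * β

# Bagging: fit ELMs on bootstrap resamples and average their predictions,
# which reduces the variance introduced by the random hidden weights.
function bagged_elm(X, y; n_models=10, hidden=32, rng=Xoshiro(0))
    n = size(X, 1)
    models = map(1:n_models) do i
        idx = rand(rng, 1:n, n)  # bootstrap resample with replacement
        elm_fit(X[idx, :], y[idx]; hidden=hidden, rng=Xoshiro(i))
    end
    return Xnew -> mean(elm_predict(Xnew, W, β) for (W, β) in models)
end
```

The hidden weights `W` are drawn once and never updated; only `β` is fit, which is why
training reduces to a single least squares solve.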
## Estimators
CausalELM implements estimators for both aggregate quantities of interest, e.g. the average
treatment effect (ATE), and individualized quantities of interest, e.g. the conditional
average treatment effect (CATE).

### Estimators for Aggregate Effects
* Interrupted Time Series Estimator
* G-computation
* Double Machine Learning

### Individualized Treatment Effect (CATE) Estimators
* S-learner
* T-learner
* X-learner
* R-learner
* Doubly Robust Estimator

## Features
* Estimate a causal effect, get a summary, and validate assumptions in just four lines of code
* Enables using the same structs for regression and classification
* Includes 13 activation functions and allows user-defined activation functions
* Most inference and validation tests do not assume functional or distributional forms
* Implements the latest techniques from statistics, econometrics, and biostatistics
* Works out of the box with AbstractArrays or any data structure that implements the Tables.jl interface
* Works with CuArrays, ROCArrays, and any other GPU-specific arrays that are AbstractArrays
* CausalELM is lightweight: its only dependency is Tables.jl
* Codebase is high-quality, well tested, and regularly updated
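As a sketch of the four-line workflow, the following example assumes the package's
documented constructor and method names (`DoubleMachineLearning`,
`estimate_causal_effect!`, `summarize`, and `validate`) and uses simulated data; consult
the API reference for the exact signatures.

```julia
using CausalELM

# Simulated data: covariates X, binary treatment t, continuous outcome y
X, t, y = rand(1000, 5), rand(0:1, 1000), rand(1000)

dml = DoubleMachineLearning(X, t, y)  # 1. initialize a model
estimate_causal_effect!(dml)          # 2. estimate the causal effect
summarize(dml)                        # 3. get a summary of the model
validate(dml)                         # 4. test the model's robustness
```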
## What's New?
* Includes support for GPU-specific arrays and data structures that implement the Tables.jl API
* Only performs randomization inference when the inference argument is set to true in summarize methods
* Summaries support calculating marginal effects and confidence intervals
* Randomization inference now uses multithreading
* CausalELM was presented at JuliaCon 2024 in Eindhoven
* Refactored code to be easier to extend and understand
## What makes CausalELM different?
Other packages, mainly EconML, DoWhy, CausalAI, and CausalML, have similar functionality.
Besides being written in Julia rather than Python, the main differences between CausalELM and
these libraries are:
  estimators provide p-values and standard errors generated via approximate randomization
  inference.
* CausalELM strives to be lightweight while still being powerful and therefore does not
  have external dependencies: all the functions it uses are in the Julia standard library
  with the exception of model constructors, which use Tables.matrix to ensure integration
  with a wide variety of data structures.
* The other packages, and many others, mostly use techniques from one field. Instead,
  CausalELM incorporates a hodgepodge of ideas from statistics, machine learning,
  econometrics, and biostatistics.
* CausalELM doesn't use any unnecessary abstractions. The only structs are the actual
  models. Estimated effects are returned as arrays, summaries are returned in a dictionary,
  and the results of validating an estimator are returned as tuples. This is in contrast
  to other packages that utilize separate structs (classes) for summaries and inference
  results.
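The approximate randomization inference mentioned above can be sketched as a simple
permutation test in plain Julia. This is an illustrative sketch, not CausalELM's
implementation; the name `randomization_pvalue` and the two-sided difference-in-means
statistic are assumptions made for the example.

```julia
using Random, Statistics

# Permutation test: shuffle treatment labels repeatedly and compare the observed
# difference in means to the resulting permutation distribution.
function randomization_pvalue(y, t; iterations=1_000, rng=Xoshiro(0))
    observed = mean(y[t]) - mean(y[.!t])
    extreme = 0
    for _ in 1:iterations
        t_perm = shuffle(rng, t)                       # re-randomize treatment
        stat = mean(y[t_perm]) - mean(y[.!t_perm])
        extreme += abs(stat) >= abs(observed)          # two-sided comparison
    end
    return extreme / iterations                        # approximate p-value
end
```

Because the p-value comes from re-randomizing the treatment assignment, no functional or
distributional form is assumed.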
## Installation
CausalELM requires Julia version 1.8 or greater and can be installed from the REPL as shown
below.
```julia
using Pkg
Pkg.add("CausalELM")
```
## More Information
For a more interactive overview, see our JuliaCon 2024 talk [here](https://www.youtube.com/watch?v=hh_cyj8feu8&t=26s).