@@ -38,26 +38,36 @@ prediction accuracy, generalization, ease of implementation, speed, and interpretability
### What's New?
* Added support for dataframes
+ * Permutation of continuous treatments draws from a continuous, instead of a discrete,
+ uniform distribution during randomization inference
* Estimators can handle any array whose values are <: Real
* Estimator constructors are now called with model(X, T, Y) instead of model(X, Y, T) (see
the sketch after this list)
* Improved documentation
* causalELM has a new logo
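
For instance, with a hypothetical DoubleMachineLearning estimator and random placeholder
data, the new argument order looks like this (a minimal sketch, not the only option):

```julia
using CausalELM

X, T, Y = rand(100, 5), rand(100), rand(100)  # covariates, treatment, outcome

m = DoubleMachineLearning(X, T, Y)  # was model(X, Y, T) in prior releases
```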
- ### Comparison with Other Packages
- Other packages, mainly EconML, DoWhy, and CausalML, have similar funcitonality. Beides being
- written in Julia rather than Python, the main differences between CausalELM and these
- libraries are:
-
- * causalELM uses extreme learning machines instead of tree-based, linear, or deep learners
- * causalELM performs cross validation during training
- * causalELM performs inference via asymptotic randomization inference rather than
- bootstrapping
- * causalELM does not require you to instantiate a model and pass it into a separate class
- or struct for training
- * causalELM creates train/test splits automatically
- * causalELM does not have external dependencies: all the functions it uses are in the
- Julia standard library
- * causalELM is simpler to use but has less flexibility than the other libraries
+ ### What makes causalELM different?
+ Other packages, mainly EconML, DoWhy, CausalAI, and CausalML, have similar functionality.
+ Besides being written in Julia rather than Python, the main differences between causalELM
+ and these libraries are:
+ * Simplicity is core to causalELM's design philosophy. causalELM only uses one type of
+ machine learning model, extreme learning machines (with optional L2 regularization), and
+ does not require you to import any other packages, initialize machine learning models,
+ pass machine learning structs to causalELM's estimators, convert dataframes or arrays to
+ a special type, or one-hot encode categorical treatments. By trading a little bit of
+ flexibility for a simple API, all of causalELM's functionality can be used with just
+ four lines of code, as sketched after this list.
+ * As part of this design principle, causalELM's estimators handle all of the work of
+ finding the best number of neurons during estimation. They create folds, or rolling
+ windows for time series data, and use an extreme learning machine interpolator to select
+ the best number of neurons.
+ * causalELM's validate method, which is specific to each estimator, allows you to test
+ the sensitivity of an estimator to possible violations of its identifying assumptions.
+ * Unlike packages that do not provide p-values and standard errors at all, estimate them
+ via bootstrapping, or use incorrect hypothesis tests, all of causalELM's estimators
+ provide p-values and standard errors generated via approximate randomization inference.
+ * causalELM strives to be lightweight while still being powerful and therefore has no
+ external dependencies: all the functions it uses are in the Julia standard library.
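+
+ A minimal sketch of that four-line workflow, assuming the DoubleMachineLearning estimator
+ with its estimate_causal_effect!, summarize, and validate methods, plus random placeholder
+ data (illustrative assumptions, not the only option):
+
+ ```julia
+ using CausalELM
+
+ # Hypothetical covariates, continuous treatment, and outcome
+ X, T, Y = rand(100, 5), rand(100), rand(100)
+
+ dml = DoubleMachineLearning(X, T, Y)  # constructors take (X, T, Y)
+ estimate_causal_effect!(dml)          # tune the number of neurons and estimate the effect
+ summarize(dml)                        # p-values and standard errors via randomization inference
+ validate(dml)                         # probe sensitivity to violations of identifying assumptions
+ ```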
### Installation
causalELM requires Julia version 1.7 or greater and can be installed from the REPL as shown
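
Assuming the package is registered under the name CausalELM, the standard REPL installation
would be:

```julia
using Pkg
Pkg.add("CausalELM")
```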