EvaluationResult of T-test (and W-test) do not contain the original percentile/alpha value #262

Open
@pabloitu

Description

The t-test requires an alpha value to create a confidence interval (e.g., 5%):

```python
def paired_t_test(forecast, benchmark_forecast, observed_catalog,
                  alpha=0.05, scale=False):
```

from which the information-gain bounds and the type-II error are returned inside an EvaluationResult. However, this alpha value is then forgotten, which forces the EvaluationResult plotting code to recall the original alpha with which the t-test was carried out:

```python
percentile = plot_args.get('percentile', 95)
```

I am not sure whether we should create a new attribute `alpha` on the resulting EvaluationResult

```python
result = EvaluationResult()
result.name = 'Paired T-Test'
result.test_distribution = (out['ig_lower'], out['ig_upper'])
result.observed_statistic = out['information_gain']
result.quantile = (out['t_statistic'], out['t_critical'])
result.sim_name = (forecast.name, benchmark_forecast.name)
result.obs_name = observed_catalog.name
result.status = 'normal'
result.min_mw = numpy.min(forecast.magnitudes)
```
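One way storing alpha could look (a hypothetical sketch with a minimal stand-in class, not the actual pycsep API) is for the evaluation to record the alpha it was run with, so the plotting code can derive its default percentile from the result instead of a hardcoded 95:

```python
class EvaluationResultSketch:
    """Minimal stand-in for pycsep's EvaluationResult (hypothetical)."""
    def __init__(self):
        self.name = None
        self.test_distribution = None
        self.observed_statistic = None
        self.alpha = None  # proposed new attribute


def paired_t_test_sketch(information_gain, ig_lower, ig_upper, alpha=0.05):
    # The evaluation remembers the alpha it was carried out with.
    result = EvaluationResultSketch()
    result.name = 'Paired T-Test'
    result.test_distribution = (ig_lower, ig_upper)
    result.observed_statistic = information_gain
    result.alpha = alpha
    return result


def default_percentile(result, plot_args=None):
    # Plotting derives its default from the stored alpha instead of a
    # hardcoded 95, while still honoring an explicit override.
    plot_args = plot_args or {}
    return plot_args.get('percentile', (1 - result.alpha) * 100)


res = paired_t_test_sketch(0.3, 0.1, 0.5, alpha=0.05)
print(default_percentile(res))                      # ~95
print(default_percentile(res, {'percentile': 99}))  # 99
```

The override path keeps backward compatibility: callers who currently pass `percentile` in `plot_args` see no change.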

or to redefine the attributes of the t-test result. For instance, shouldn't result.quantile, rather than result.test_distribution, actually contain the information-gain lower and upper bounds?

Also, the W-test confidence interval is calculated inside the plotting functions instead of in the evaluation function itself.
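To illustrate where that computation could live, here is a hedged sketch of doing the interval inside the evaluation step and returning it with the result, so the plot only draws it. This uses a simple normal-approximation interval on the mean paired difference as a stand-in; it is not pycsep's actual Wilcoxon machinery:

```python
import math
import statistics


def w_test_sketch(differences, alpha=0.05):
    """Hypothetical sketch: the evaluation function computes the
    confidence interval itself (here a normal approximation on the mean
    of the paired differences) and returns the bounds together with the
    alpha, so plotting code does not have to recompute anything."""
    n = len(differences)
    mean = statistics.fmean(differences)
    sem = statistics.stdev(differences) / math.sqrt(n)
    # z-value for the two-sided (1 - alpha) interval; ~1.96 for alpha=0.05.
    z = statistics.NormalDist().inv_cdf(1 - alpha / 2)
    return {
        'mean': mean,
        'ci': (mean - z * sem, mean + z * sem),
        'alpha': alpha,  # carried along for the plot
    }


out = w_test_sketch([0.2, 0.1, 0.3, 0.25, 0.15], alpha=0.05)
```

With the bounds and alpha packaged in the result, a plotting function reduces to drawing `out['ci']`, and changing alpha never requires touching the plot code.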
