A drawback of most normality tests is that they tend to reject the hypothesis of normality when the sample size is large. As can be seen in some of the histograms on the following pages, some fairly ``normal''-looking distributions fail the test while decidedly non-normal distributions pass. For this reason, the $p$-value is less important than the qualitative appearance of the histogram. If the histogram exhibits the typical bell-shaped curve, this adds confidence to the statistical treatment of the data. If the histogram is not bell-shaped, this might cast doubt on the statistical treatment of that particular quantity.
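As an illustration only (this script is not part of the FDS validation suite), the following Python sketch uses SciPy's D'Agostino-Pearson test, \verb|scipy.stats.normaltest|, to demonstrate the sample-size effect described above: the same mildly skewed distribution typically passes the test with 200 samples but is rejected with 10,000 samples, even though a histogram of either sample still looks bell-shaped.

\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(12345)

def skewed_sample(n):
    # Nearly normal data with a mild positive skew (skewness ~ 0.18);
    # a histogram of either sample below still looks bell-shaped.
    return rng.normal(size=n) + 0.5 * rng.exponential(size=n)

for n in (200, 10000):
    stat, p = stats.normaltest(skewed_sample(n))  # D'Agostino-Pearson K^2
    print(f"n = {n:6d}:  K^2 = {stat:6.1f},  p = {p:.3g}")

# Typical outcome: p > 0.05 for n = 200, but p << 0.05 for n = 10000,
# even though the underlying distribution is the same.
\end{verbatim}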
@@ -304,7 +304,7 @@ \section{Summary of FDS Validation Git Statistics}
Table~\ref{validation_git_stats} shows the Git repository statistics for all of the validation datasets. For each dataset, the table lists the date and Git revision string of the most recent commit of validation results to the repository.
\caption[Summary of plume entrainment predictions]{A comparison of predicted and measured mass flow rates at various heights for the Harrison Spill Plume experiments.}
\caption[Results of Cup Burner experiments]{Comparison of measured and predicted minimum extinguishing volume fractions for the cup burner tests. Fuel type is indicated by color, and extinguishing agent is indicated by shape.}
\caption[Extinguishment times for the USCG/HAI water mist suppression tests]{Comparison of measured and predicted extinguishment times for the USCG/HAI water mist suppression tests.}