diff --git a/docs/core/overview.mdx b/docs/core/overview.mdx
index fb7a2a4..5384992 100644
--- a/docs/core/overview.mdx
+++ b/docs/core/overview.mdx
@@ -273,6 +273,193 @@ Use `OnlineStatistics` when data arrives incrementally (streaming, event-driven
`OnlineStatistics` uses Welford's algorithm with the Terriberry extension for numerically stable single-pass computation of higher-order moments. The algorithm updates running sums of powers of deviations from the current mean, avoiding the catastrophic cancellation that affects naive two-pass formulas on large datasets.
+## Process Capability
+
+Process capability indices quantify how well a process fits within its specification limits. `processCapability(lsl, usl)` returns four Statistical Process Control (SPC) indices in one call:
+
+- **Cp, Cpk** — potential and actual capability using the sample standard deviation (short-term, within-subgroup spread).
+- **Pp, Ppk** — the same formulas using the population standard deviation (long-term, overall spread).
+
+The `-k` variants penalize a process that is off-center relative to the midpoint of the spec, so `Cpk` is never larger than `Cp`.
+
+{/*---FUN coreProcessCapability--*/}
+
+```kotlin
+// Ten parts measured against a spec window of [48, 52]
+val measurements = doubleArrayOf(
+ 50.0, 50.5, 49.5, 50.2, 49.8, 50.1, 49.9, 50.3, 49.7, 50.0
+)
+val capability = measurements.processCapability(lsl = 48.0, usl = 52.0)
+
+capability.cp // 2.2646 — potential capability (spread vs tolerance)
+capability.cpk // 2.2646 — actual capability (penalizes off-centering)
+capability.pp // 2.3870 — overall (population σ) counterpart of Cp
+capability.ppk // 2.3870 — overall counterpart of Cpk
+```
+
+{/*---END--*/}
+
+
+Use these indices only on a process that is already in statistical control (stable over time — see [Shewhart Control Charts](#shewhart-control-charts) below). For an unstable process the measured spread is not a fixed property of the process.
+
+
+Values `≥ 1.33` are usually considered capable; `≥ 1.67` highly capable. When `Cpk ≪ Cp`, re-center the process before trying to reduce variance.
+
+
+$$
+\mathrm{Cp} = \frac{\mathrm{USL} - \mathrm{LSL}}{6\sigma_s}, \qquad
+\mathrm{Cpk} = \min\!\left(\frac{\mathrm{USL} - \bar{x}}{3\sigma_s},\ \frac{\bar{x} - \mathrm{LSL}}{3\sigma_s}\right)
+$$
+
+Pp and Ppk use the population standard deviation $\sigma_p$ (divisor $n$) in place of the sample standard deviation $\sigma_s$ (divisor $n-1$). `processCapability` computes both in a single numerically stable Welford pass.
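+
+The definitions are easy to check by hand. This self-contained sketch (plain Kotlin, not the library API) reproduces the values from the example above:
+
+```kotlin
+// Recomputing Cp/Cpk/Pp from the definitions (illustration only)
+val x = doubleArrayOf(50.0, 50.5, 49.5, 50.2, 49.8, 50.1, 49.9, 50.3, 49.7, 50.0)
+val mean = x.sum() / x.size                        // 50.0 (exactly centered)
+val ss = x.sumOf { (it - mean) * (it - mean) }     // sum of squared deviations
+val sSample = kotlin.math.sqrt(ss / (x.size - 1))  // divisor n-1
+val sPop = kotlin.math.sqrt(ss / x.size)           // divisor n
+val cp = (52.0 - 48.0) / (6 * sSample)             // ≈ 2.2646
+val cpk = minOf((52.0 - mean) / (3 * sSample), (mean - 48.0) / (3 * sSample)) // ≈ 2.2646
+val pp = (52.0 - 48.0) / (6 * sPop)                // ≈ 2.3870
+```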
+
+
+## Shewhart Control Charts
+
+Shewhart control charts plot subgroup statistics over time with three-sigma control limits. `xBarRChart()` monitors the process mean together with the range within each subgroup; `xBarSChart()` uses the sample standard deviation instead — more efficient for subgroup sizes above 10. Both require equal-sized subgroups of 2–25 observations.
+
+{/*---FUN coreXBarRChart--*/}
+
+```kotlin
+// Five subgroups of four parts; bracket width monitored per batch
+val subgroups = listOf(
+ doubleArrayOf(72.0, 84.0, 79.0, 49.0),
+ doubleArrayOf(56.0, 87.0, 33.0, 42.0),
+ doubleArrayOf(55.0, 73.0, 22.0, 60.0),
+ doubleArrayOf(44.0, 80.0, 54.0, 74.0),
+ doubleArrayOf(97.0, 26.0, 48.0, 58.0),
+)
+val chart = xBarRChart(subgroups)
+
+chart.centerLine // 59.65 — grand mean (x-double-bar)
+chart.ucl // 95.6626 — upper control limit for the mean
+chart.lcl // 23.6374 — lower control limit for the mean
+chart.rChart.centerLine // 49.4 — average range (R-bar)
+chart.rChart.ucl // 112.7308 — upper limit for within-subgroup range
+chart.rChart.lcl // 0.0 — lower limit (D₃ = 0 for n ≤ 6)
+```
+
+{/*---END--*/}
+
+Control limits use the standard SPC constants $A_2, A_3, D_3, D_4, B_3, B_4, c_4$ (Montgomery, *Introduction to Statistical Quality Control*, Appendix VI) available for subgroup sizes 2–25 through `spcConstants(n)`.
+
+
+For $k$ subgroups of size $n$ with subgroup means $\bar{x}_i$, ranges $R_i$, and standard deviations $s_i$:
+
+$$
+\text{x̄-R:}\quad \mathrm{UCL}/\mathrm{LCL} = \bar{\bar{x}} \pm A_2 \bar{R}, \quad R\text{-chart: } [D_3 \bar{R},\ D_4 \bar{R}]
+$$
+
+$$
+\text{x̄-S:}\quad \mathrm{UCL}/\mathrm{LCL} = \bar{\bar{x}} \pm A_3 \bar{s}, \quad S\text{-chart: } [B_3 \bar{s},\ B_4 \bar{s}]
+$$
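+
+With the tabulated constants for $n = 4$ ($A_2 = 0.729$, $D_3 = 0$, $D_4 = 2.282$, from the standard SPC tables), the x̄-R limits from the example above can be recomputed by hand in plain Kotlin:
+
+```kotlin
+// Hand-rolled x̄-R limits for subgroups of size 4 (A2 = 0.729, D4 = 2.282)
+val groups = listOf(
+    doubleArrayOf(72.0, 84.0, 79.0, 49.0),
+    doubleArrayOf(56.0, 87.0, 33.0, 42.0),
+    doubleArrayOf(55.0, 73.0, 22.0, 60.0),
+    doubleArrayOf(44.0, 80.0, 54.0, 74.0),
+    doubleArrayOf(97.0, 26.0, 48.0, 58.0),
+)
+val grandMean = groups.map { it.average() }.average()                   // 59.65
+val rBar = groups.map { it.maxOrNull()!! - it.minOrNull()!! }.average() // 49.4
+val uclX = grandMean + 0.729 * rBar  // 95.6626
+val lclX = grandMean - 0.729 * rBar  // 23.6374
+val uclR = 2.282 * rBar              // 112.7308
+```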
+
+
+## CUSUM Chart
+
+A Shewhart chart reacts slowly to drifts of less than 2σ because every point is judged in isolation. `cusum()` accumulates deviations from target over time, so a 0.5σ–1σ drift is detected within a few observations. The two-sided tabular form tracks an upper sum $C^+$ that catches upward shifts and a lower sum $C^-$ for downward shifts. An alarm fires on the first index where either sum exceeds the decision interval $H$.
+
+{/*---FUN coreCusum--*/}
+
+```kotlin
+// Individual measurements from a process with target 10, drifting upward
+val observations = doubleArrayOf(10.2, 10.4, 10.6, 10.9, 11.2, 11.5, 11.8, 12.0)
+val result = cusum(observations, target = 10.0, k = 0.5, h = 3.0)
+
+result.sPlus // [0.0, 0.0, 0.1, 0.5, 1.2, 2.2, 3.5, 5.0]
+result.sMinus // all zero — no downward drift
+result.alarmIndex // 6 — first index where C⁺ > H
+```
+
+{/*---END--*/}
+
+
+Tune `k` to half the shift size you want to detect, expressed in the same units as the data — the common default $K \approx 0.5\sigma$ targets a 1σ drift. Set `h` to 4σ–5σ to match the in-control average run length of a 3σ Shewhart chart while reacting much faster to small shifts.
+
+
+
+$$
+C^+_i = \max\bigl(0,\; C^+_{i-1} + (x_i - \mu_0 - K)\bigr), \qquad
+C^-_i = \max\bigl(0,\; C^-_{i-1} + (\mu_0 - K - x_i)\bigr)
+$$
+
+starting from $C^\pm_0 = 0$, with an alarm the first time $C^+_i > H$ or $C^-_i > H$.
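+
+The recursion is short enough to inline. A plain-Kotlin sketch (not the library API) reproducing the example above:
+
+```kotlin
+// Tabular CUSUM by hand: target = 10.0, K = 0.5, H = 3.0
+val xs = doubleArrayOf(10.2, 10.4, 10.6, 10.9, 11.2, 11.5, 11.8, 12.0)
+var cPlus = 0.0
+var cMinus = 0.0
+var alarm = -1
+for ((i, v) in xs.withIndex()) {
+    cPlus = maxOf(0.0, cPlus + (v - 10.0 - 0.5))
+    cMinus = maxOf(0.0, cMinus + (10.0 - 0.5 - v))
+    if (alarm < 0 && (cPlus > 3.0 || cMinus > 3.0)) alarm = i
+}
+// cPlus ends at 5.0, cMinus stays 0.0, alarm fires at index 6
+```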
+
+
+## EWMA Chart
+
+EWMA (Roberts, 1959) is the other classic small-shift detector. Instead of an unbounded cumulative sum, `ewma()` maintains a weighted moving average that gives recent observations more influence while retaining memory of the past. Control limits widen with time until they reach a steady state, so the chart is most sensitive early — useful for catching an initial shift.
+
+{/*---FUN coreEwma--*/}
+
+```kotlin
+// EWMA chart: target = 25, σ = 1, λ = 0.2, L = 3
+val observations = doubleArrayOf(25.0, 24.5, 25.2, 26.1, 25.8, 27.0, 26.5, 28.0)
+val result = ewma(
+ observations,
+ target = 25.0,
+ sigma = 1.0,
+ lambda = 0.2,
+ controlLimitWidth = 3.0
+)
+
+result.smoothedValues[0] // 25.0 — Z₁ = λ·x₁ + (1-λ)·target
+result.smoothedValues[7] // 26.2549 — smoothed statistic at t = 7
+result.ucl[0] // 25.6 — narrow at first, widens with t
+result.ucl[7] // 25.9858 — approaching steady state
+result.outOfControl // [7] — Z₇ exceeds UCL₇
+```
+
+{/*---END--*/}
+
+
+$\lambda = 0.2$ with $L \approx 2.7$–$3.0$ is a common default. Smaller $\lambda$ emphasizes memory and detects smaller shifts; $\lambda = 1$ collapses EWMA into a Shewhart individuals chart.
+
+
+
+$$
+Z_t = \lambda x_t + (1 - \lambda) Z_{t-1}, \qquad Z_0 = \mu_0
+$$
+
+$$
+\mathrm{UCL}_t/\mathrm{LCL}_t = \mu_0 \pm L \sigma \sqrt{\frac{\lambda}{2 - \lambda}\,\bigl(1 - (1 - \lambda)^{2t}\bigr)}
+$$
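+
+The same numbers fall out of the recursion directly. A plain-Kotlin sketch (not the library API):
+
+```kotlin
+// EWMA recursion by hand: mu0 = 25.0, sigma = 1.0, lambda = 0.2, L = 3.0
+val obs = doubleArrayOf(25.0, 24.5, 25.2, 26.1, 25.8, 27.0, 26.5, 28.0)
+val lambda = 0.2
+var z = 25.0     // Z0 = mu0
+var decay = 1.0  // running (1 - lambda)^(2t)
+val zs = DoubleArray(obs.size)
+val ucls = DoubleArray(obs.size)
+for (t in obs.indices) {
+    z = lambda * obs[t] + (1 - lambda) * z
+    decay *= (1 - lambda) * (1 - lambda)
+    zs[t] = z
+    ucls[t] = 25.0 + 3.0 * kotlin.math.sqrt(lambda / (2 - lambda) * (1 - decay))
+}
+// zs[0] = 25.0 and ucls[0] = 25.6; zs[7] ≈ 26.2549 exceeds ucls[7] ≈ 25.9858
+```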
+
+
+## Western Electric Rules
+
+`westernElectricRules()` extends a Shewhart chart beyond the basic ±3σ check with four zone and run tests that catch trends, clusters, and prolonged one-sided runs.
+
+| Rule | Pattern | Detects |
+|------|---------|---------|
+| **1** | 1 point beyond $\pm 3\sigma$ | extreme single excursion |
+| **2** | 2 of last 3 points beyond $\pm 2\sigma$, same side | strong shift |
+| **3** | 4 of last 5 points beyond $\pm 1\sigma$, same side | moderate shift |
+| **4** | 8 consecutive points on the same side of the center | sustained shift, any magnitude |
+
+Each rule's array contains the *trigger indices* — the observation whose arrival completes the offending pattern.
+
+{/*---FUN coreWesternElectricRules--*/}
+
+```kotlin
+// Process drifting upward in the last four observations
+val observations = doubleArrayOf(
+ 0.1, 0.2, -0.3, 0.0, 1.4, 1.2, 2.4, 2.6, 3.5, 2.2
+)
+val violations = westernElectricRules(observations, center = 0.0, sigma = 1.0)
+
+violations.rule1 // indices of points beyond ±3σ
+violations.rule2 // indices where 2 of last 3 points are beyond ±2σ (same side)
+violations.rule3 // indices where 4 of last 5 points are beyond ±1σ (same side)
+violations.rule4 // indices where 8 consecutive points fall on the same side
+```
+
+{/*---END--*/}
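+
+The rules themselves are simple scans over the standardized values. A from-scratch sketch of rules 1 and 4 (illustration, not the library implementation):
+
+```kotlin
+// Rule 1: any point beyond 3 sigma; rule 4: 8 consecutive points one side of center
+val xs = doubleArrayOf(0.1, 0.2, -0.3, 0.0, 1.4, 1.2, 2.4, 2.6, 3.5, 2.2)
+val z = xs.map { (it - 0.0) / 1.0 }  // standardize with center = 0, sigma = 1
+val rule1 = z.indices.filter { kotlin.math.abs(z[it]) > 3.0 }
+val rule4 = z.indices.filter { i ->
+    i >= 7 && (((i - 7)..i).all { z[it] > 0.0 } || ((i - 7)..i).all { z[it] < 0.0 })
+}
+// rule1 == [8] (the 3.5); rule4 is empty (the 0.0 at index 3 breaks every run)
+```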
+
+
+Combine a Shewhart chart (large shifts) with CUSUM or EWMA (small shifts) and Western Electric Rules (patterns) — the three views together catch the widest range of out-of-control conditions.
+
+
## Error Handling
diff --git a/docs/de/core/overview.mdx b/docs/de/core/overview.mdx
index 7f7b830..38bf3b5 100644
--- a/docs/de/core/overview.mdx
+++ b/docs/de/core/overview.mdx
@@ -273,6 +273,193 @@ Verwenden Sie `OnlineStatistics`, wenn Daten inkrementell eintreffen (Streaming,
`OnlineStatistics` verwendet Welfords Algorithmus mit der Terriberry-Erweiterung für numerisch stabile Einpass-Berechnung höherer Momente. Der Algorithmus aktualisiert laufende Summen von Potenzen der Abweichungen vom aktuellen Mittelwert und vermeidet so die katastrophale Auslöschung, die naive Zweipass-Formeln bei großen Datensätzen betrifft.
+## Prozessfähigkeit
+
+Prozessfähigkeitsindizes quantifizieren, wie gut ein Prozess innerhalb seiner Spezifikationsgrenzen liegt. `processCapability(lsl, usl)` liefert vier SPC-Indizes in einem Aufruf:
+
+- **Cp, Cpk** — potentielle und tatsächliche Fähigkeit mit der Stichproben-Standardabweichung (kurzfristige, innerhalb von Untergruppen auftretende Streuung).
+- **Pp, Ppk** — dieselben Formeln mit der Populations-Standardabweichung (langfristige Gesamtstreuung).
+
+Die `-k`-Varianten bestrafen einen Prozess, der relativ zur Spezifikationsmitte dezentriert ist, sodass `Cpk` nie größer als `Cp` ist.
+
+{/*---FUN coreProcessCapability--*/}
+
+```kotlin
+// Ten parts measured against a spec window of [48, 52]
+val measurements = doubleArrayOf(
+ 50.0, 50.5, 49.5, 50.2, 49.8, 50.1, 49.9, 50.3, 49.7, 50.0
+)
+val capability = measurements.processCapability(lsl = 48.0, usl = 52.0)
+
+capability.cp // 2.2646 — potential capability (spread vs tolerance)
+capability.cpk // 2.2646 — actual capability (penalizes off-centering)
+capability.pp // 2.3870 — overall (population σ) counterpart of Cp
+capability.ppk // 2.3870 — overall counterpart of Cpk
+```
+
+{/*---END--*/}
+
+
+Verwenden Sie diese Indizes nur für einen bereits statistisch beherrschten Prozess (stabil über die Zeit — siehe [Shewhart-Kontrollkarten](#shewhart-kontrollkarten) unten). Für einen instabilen Prozess ist die gemessene Streuung keine feste Prozesseigenschaft.
+
+
+Werte `≥ 1.33` gelten üblicherweise als fähig, `≥ 1.67` als hochgradig fähig. Wenn `Cpk ≪ Cp`, sollten Sie den Prozess zuerst neu zentrieren, bevor Sie versuchen, die Varianz zu reduzieren.
+
+
+$$
+\mathrm{Cp} = \frac{\mathrm{USL} - \mathrm{LSL}}{6\sigma_s}, \qquad
+\mathrm{Cpk} = \min\!\left(\frac{\mathrm{USL} - \bar{x}}{3\sigma_s},\ \frac{\bar{x} - \mathrm{LSL}}{3\sigma_s}\right)
+$$
+
+Pp und Ppk verwenden die Populations-Standardabweichung $\sigma_p$ (Divisor $n$) anstelle der Stichproben-Standardabweichung $\sigma_s$ (Divisor $n-1$). `processCapability` berechnet beide in einem einzigen numerisch stabilen Welford-Durchlauf.
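+
+Die Definitionen lassen sich leicht von Hand prüfen. Diese eigenständige Skizze (reines Kotlin, nicht die Bibliotheks-API) reproduziert die Werte aus dem Beispiel oben:
+
+```kotlin
+// Recomputing Cp/Cpk/Pp from the definitions (illustration only)
+val x = doubleArrayOf(50.0, 50.5, 49.5, 50.2, 49.8, 50.1, 49.9, 50.3, 49.7, 50.0)
+val mean = x.sum() / x.size                        // 50.0 (exactly centered)
+val ss = x.sumOf { (it - mean) * (it - mean) }     // sum of squared deviations
+val sSample = kotlin.math.sqrt(ss / (x.size - 1))  // divisor n-1
+val sPop = kotlin.math.sqrt(ss / x.size)           // divisor n
+val cp = (52.0 - 48.0) / (6 * sSample)             // ≈ 2.2646
+val cpk = minOf((52.0 - mean) / (3 * sSample), (mean - 48.0) / (3 * sSample)) // ≈ 2.2646
+val pp = (52.0 - 48.0) / (6 * sPop)                // ≈ 2.3870
+```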
+
+
+## Shewhart-Kontrollkarten
+
+Shewhart-Kontrollkarten zeichnen Untergruppenstatistiken über die Zeit mit Drei-Sigma-Grenzen auf. `xBarRChart()` überwacht den Prozessmittelwert zusammen mit der Spannweite innerhalb jeder Untergruppe; `xBarSChart()` nutzt stattdessen die Stichproben-Standardabweichung — effizienter für Untergruppengrößen über 10. Beide benötigen gleich große Untergruppen mit 2–25 Beobachtungen.
+
+{/*---FUN coreXBarRChart--*/}
+
+```kotlin
+// Five subgroups of four parts; bracket width monitored per batch
+val subgroups = listOf(
+ doubleArrayOf(72.0, 84.0, 79.0, 49.0),
+ doubleArrayOf(56.0, 87.0, 33.0, 42.0),
+ doubleArrayOf(55.0, 73.0, 22.0, 60.0),
+ doubleArrayOf(44.0, 80.0, 54.0, 74.0),
+ doubleArrayOf(97.0, 26.0, 48.0, 58.0),
+)
+val chart = xBarRChart(subgroups)
+
+chart.centerLine // 59.65 — grand mean (x-double-bar)
+chart.ucl // 95.6626 — upper control limit for the mean
+chart.lcl // 23.6374 — lower control limit for the mean
+chart.rChart.centerLine // 49.4 — average range (R-bar)
+chart.rChart.ucl // 112.7308 — upper limit for within-subgroup range
+chart.rChart.lcl // 0.0 — lower limit (D₃ = 0 for n ≤ 6)
+```
+
+{/*---END--*/}
+
+Die Kontrollgrenzen basieren auf den Standard-SPC-Konstanten $A_2, A_3, D_3, D_4, B_3, B_4, c_4$ (Montgomery, *Introduction to Statistical Quality Control*, Anhang VI), tabelliert für Untergruppengrößen 2–25 und direkt über `spcConstants(n)` verfügbar.
+
+
+Für $k$ Untergruppen der Größe $n$ mit Untergruppenmittelwerten $\bar{x}_i$, Spannweiten $R_i$ und Standardabweichungen $s_i$:
+
+$$
+\text{x̄-R:}\quad \mathrm{UCL}/\mathrm{LCL} = \bar{\bar{x}} \pm A_2 \bar{R}, \quad R\text{-Karte: } [D_3 \bar{R},\ D_4 \bar{R}]
+$$
+
+$$
+\text{x̄-S:}\quad \mathrm{UCL}/\mathrm{LCL} = \bar{\bar{x}} \pm A_3 \bar{s}, \quad S\text{-Karte: } [B_3 \bar{s},\ B_4 \bar{s}]
+$$
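+
+Mit den tabellierten Konstanten für $n = 4$ ($A_2 = 0.729$, $D_3 = 0$, $D_4 = 2.282$, Standard-SPC-Tabellen) lassen sich die x̄-R-Grenzen des Beispiels oben in reinem Kotlin von Hand nachrechnen:
+
+```kotlin
+// Hand-rolled x̄-R limits for subgroups of size 4 (A2 = 0.729, D4 = 2.282)
+val groups = listOf(
+    doubleArrayOf(72.0, 84.0, 79.0, 49.0),
+    doubleArrayOf(56.0, 87.0, 33.0, 42.0),
+    doubleArrayOf(55.0, 73.0, 22.0, 60.0),
+    doubleArrayOf(44.0, 80.0, 54.0, 74.0),
+    doubleArrayOf(97.0, 26.0, 48.0, 58.0),
+)
+val grandMean = groups.map { it.average() }.average()                   // 59.65
+val rBar = groups.map { it.maxOrNull()!! - it.minOrNull()!! }.average() // 49.4
+val uclX = grandMean + 0.729 * rBar  // 95.6626
+val lclX = grandMean - 0.729 * rBar  // 23.6374
+val uclR = 2.282 * rBar              // 112.7308
+```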
+
+
+## CUSUM-Karte
+
+Eine Shewhart-Karte reagiert langsam auf Drifts unter 2σ, weil jeder Punkt isoliert bewertet wird. `cusum()` akkumuliert Abweichungen vom Zielwert über die Zeit, sodass eine Drift von 0.5σ–1σ innerhalb weniger Beobachtungen erkannt wird. Die zweiseitige tabellarische Form verfolgt eine obere Summe $C^+$ für Aufwärtsverschiebungen und eine untere Summe $C^-$ für Abwärtsverschiebungen. Ein Alarm wird beim ersten Index ausgelöst, an dem eine der Summen das Entscheidungsintervall $H$ überschreitet.
+
+{/*---FUN coreCusum--*/}
+
+```kotlin
+// Individual measurements from a process with target 10, drifting upward
+val observations = doubleArrayOf(10.2, 10.4, 10.6, 10.9, 11.2, 11.5, 11.8, 12.0)
+val result = cusum(observations, target = 10.0, k = 0.5, h = 3.0)
+
+result.sPlus // [0.0, 0.0, 0.1, 0.5, 1.2, 2.2, 3.5, 5.0]
+result.sMinus // all zero — no downward drift
+result.alarmIndex // 6 — first index where C⁺ > H
+```
+
+{/*---END--*/}
+
+
+Stellen Sie `k` auf die Hälfte der zu erkennenden Verschiebungsgröße ein, ausgedrückt in denselben Einheiten wie die Daten — gebräuchlicher Standard ist $K \approx 0.5\sigma$, was auf eine 1σ-Drift abzielt. Setzen Sie `h` auf 4σ–5σ, um die mittlere In-Control-Lauflänge einer 3σ-Shewhart-Karte zu erreichen und dennoch deutlich schneller auf kleine Verschiebungen zu reagieren.
+
+
+
+$$
+C^+_i = \max\bigl(0,\; C^+_{i-1} + (x_i - \mu_0 - K)\bigr), \qquad
+C^-_i = \max\bigl(0,\; C^-_{i-1} + (\mu_0 - K - x_i)\bigr)
+$$
+
+startend mit $C^\pm_0 = 0$, Alarm beim ersten $C^+_i > H$ bzw. $C^-_i > H$.
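+
+Die Rekursion ist kurz genug, um sie direkt auszuschreiben. Eine Kotlin-Skizze (nicht die Bibliotheks-API), die das Beispiel oben reproduziert:
+
+```kotlin
+// Tabular CUSUM by hand: target = 10.0, K = 0.5, H = 3.0
+val xs = doubleArrayOf(10.2, 10.4, 10.6, 10.9, 11.2, 11.5, 11.8, 12.0)
+var cPlus = 0.0
+var cMinus = 0.0
+var alarm = -1
+for ((i, v) in xs.withIndex()) {
+    cPlus = maxOf(0.0, cPlus + (v - 10.0 - 0.5))
+    cMinus = maxOf(0.0, cMinus + (10.0 - 0.5 - v))
+    if (alarm < 0 && (cPlus > 3.0 || cMinus > 3.0)) alarm = i
+}
+// cPlus ends at 5.0, cMinus stays 0.0, alarm fires at index 6
+```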
+
+
+## EWMA-Karte
+
+EWMA (Roberts, 1959) ist das zweite klassische Werkzeug zur Erkennung kleiner Verschiebungen. Statt einer unbeschränkten laufenden Summe führt `ewma()` einen gewichteten gleitenden Mittelwert, der aktuellen Beobachtungen mehr Gewicht gibt, aber ein Gedächtnis der Vergangenheit behält. Die Kontrollgrenzen weiten sich mit der Zeit, bis sie einen stationären Wert erreichen — die Karte ist früh am empfindlichsten, was eine anfängliche Verschiebung zuverlässig erfasst.
+
+{/*---FUN coreEwma--*/}
+
+```kotlin
+// EWMA chart: target = 25, σ = 1, λ = 0.2, L = 3
+val observations = doubleArrayOf(25.0, 24.5, 25.2, 26.1, 25.8, 27.0, 26.5, 28.0)
+val result = ewma(
+ observations,
+ target = 25.0,
+ sigma = 1.0,
+ lambda = 0.2,
+ controlLimitWidth = 3.0
+)
+
+result.smoothedValues[0] // 25.0 — Z₁ = λ·x₁ + (1-λ)·target
+result.smoothedValues[7] // 26.2549 — smoothed statistic at t = 7
+result.ucl[0] // 25.6 — narrow at first, widens with t
+result.ucl[7] // 25.9858 — approaching steady state
+result.outOfControl // [7] — Z₇ exceeds UCL₇
+```
+
+{/*---END--*/}
+
+
+$\lambda = 0.2$ mit $L \approx 2.7$–$3.0$ ist ein gebräuchlicher Standard. Kleineres $\lambda$ betont das Gedächtnis und erkennt kleinere Verschiebungen; $\lambda = 1$ reduziert EWMA auf eine Shewhart-Einzelwertkarte.
+
+
+
+$$
+Z_t = \lambda x_t + (1 - \lambda) Z_{t-1}, \qquad Z_0 = \mu_0
+$$
+
+$$
+\mathrm{UCL}_t/\mathrm{LCL}_t = \mu_0 \pm L \sigma \sqrt{\frac{\lambda}{2 - \lambda}\,\bigl(1 - (1 - \lambda)^{2t}\bigr)}
+$$
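+
+Dieselben Zahlen ergeben sich direkt aus der Rekursion. Eine Kotlin-Skizze (nicht die Bibliotheks-API):
+
+```kotlin
+// EWMA recursion by hand: mu0 = 25.0, sigma = 1.0, lambda = 0.2, L = 3.0
+val obs = doubleArrayOf(25.0, 24.5, 25.2, 26.1, 25.8, 27.0, 26.5, 28.0)
+val lambda = 0.2
+var z = 25.0     // Z0 = mu0
+var decay = 1.0  // running (1 - lambda)^(2t)
+val zs = DoubleArray(obs.size)
+val ucls = DoubleArray(obs.size)
+for (t in obs.indices) {
+    z = lambda * obs[t] + (1 - lambda) * z
+    decay *= (1 - lambda) * (1 - lambda)
+    zs[t] = z
+    ucls[t] = 25.0 + 3.0 * kotlin.math.sqrt(lambda / (2 - lambda) * (1 - decay))
+}
+// zs[0] = 25.0 und ucls[0] = 25.6; zs[7] ≈ 26.2549 überschreitet ucls[7] ≈ 25.9858
+```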
+
+
+## Western-Electric-Regeln
+
+`westernElectricRules()` erweitert eine Shewhart-Karte über den einfachen ±3σ-Check hinaus mit vier Zonen- und Lauflängentests, die Trends, Cluster und anhaltende einseitige Läufe erkennen.
+
+| Regel | Muster | Erkennt |
+|-------|--------|---------|
+| **1** | 1 Punkt jenseits $\pm 3\sigma$ | extreme Einzelausschläge |
+| **2** | 2 der letzten 3 Punkte jenseits $\pm 2\sigma$, gleiche Seite | starke Verschiebung |
+| **3** | 4 der letzten 5 Punkte jenseits $\pm 1\sigma$, gleiche Seite | mittlere Verschiebung |
+| **4** | 8 aufeinanderfolgende Punkte auf derselben Seite der Mittellinie | anhaltende Verschiebung beliebiger Größe |
+
+Das Array jeder Regel enthält die *Auslöseindizes* — jene Beobachtung, deren Eintreffen das betreffende Muster vervollständigt.
+
+{/*---FUN coreWesternElectricRules--*/}
+
+```kotlin
+// Process drifting upward in the last four observations
+val observations = doubleArrayOf(
+ 0.1, 0.2, -0.3, 0.0, 1.4, 1.2, 2.4, 2.6, 3.5, 2.2
+)
+val violations = westernElectricRules(observations, center = 0.0, sigma = 1.0)
+
+violations.rule1 // indices of points beyond ±3σ
+violations.rule2 // indices where 2 of last 3 points are beyond ±2σ (same side)
+violations.rule3 // indices where 4 of last 5 points are beyond ±1σ (same side)
+violations.rule4 // indices where 8 consecutive points fall on the same side
+```
+
+{/*---END--*/}
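+
+Die Regeln selbst sind einfache Durchläufe über die standardisierten Werte. Eine Skizze der Regeln 1 und 4 (Illustration, nicht die Bibliotheksimplementierung):
+
+```kotlin
+// Rule 1: any point beyond 3 sigma; rule 4: 8 consecutive points one side of center
+val xs = doubleArrayOf(0.1, 0.2, -0.3, 0.0, 1.4, 1.2, 2.4, 2.6, 3.5, 2.2)
+val z = xs.map { (it - 0.0) / 1.0 }  // standardize with center = 0, sigma = 1
+val rule1 = z.indices.filter { kotlin.math.abs(z[it]) > 3.0 }
+val rule4 = z.indices.filter { i ->
+    i >= 7 && (((i - 7)..i).all { z[it] > 0.0 } || ((i - 7)..i).all { z[it] < 0.0 })
+}
+// rule1 == [8] (die 3.5); rule4 ist leer (die 0.0 bei Index 3 bricht jeden Lauf)
+```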
+
+
+Kombinieren Sie eine Shewhart-Karte (große Verschiebungen) mit CUSUM oder EWMA (kleine Verschiebungen) und den Western-Electric-Regeln (Muster) — die drei Sichtweisen zusammen decken die breiteste Palette von Außer-Kontrolle-Zuständen ab.
+
+
## Fehlerbehandlung
diff --git a/docs/de/getting-started/installation.mdx b/docs/de/getting-started/installation.mdx
index 48766d0..dff0fd8 100644
--- a/docs/de/getting-started/installation.mdx
+++ b/docs/de/getting-started/installation.mdx
@@ -75,7 +75,7 @@ Beginnen Sie mit `kstats-core` für deskriptive Zusammenfassungen. Fügen Sie `k
## Nächste Schritte
-
+
Die erste Analyse mit Beispielen aus allen Modulen durchführen.
diff --git a/docs/de/getting-started/introduction.mdx b/docs/de/getting-started/introduction.mdx
index 805f863..7b71788 100644
--- a/docs/de/getting-started/introduction.mdx
+++ b/docs/de/getting-started/introduction.mdx
@@ -67,7 +67,7 @@ fitted.cdf(6.0) // 0.6335
BOM oder einzelnes Modul zu einem Gradle-KTS-Projekt hinzufügen.
-
+
Eine Zusammenfassung berechnen, eine Verteilung anpassen und einen Hypothesentest durchführen.
diff --git a/docs/de/hypothesis/overview.mdx b/docs/de/hypothesis/overview.mdx
index 6e4fd01..c78368b 100644
--- a/docs/de/hypothesis/overview.mdx
+++ b/docs/de/hypothesis/overview.mdx
@@ -320,6 +320,76 @@ result.pValue // p-value
{/*---END--*/}
+## Sind Beobachtungen Ausreißer?
+
+### Grubbs-Test
+
+Der Grubbs-Test (Extreme Studentized Deviate Test) prüft formell, ob die am weitesten vom Mittelwert entfernte Beobachtung ein Ausreißer ist, vorausgesetzt, die übrigen Daten sind näherungsweise normalverteilt. Die Teststatistik lautet
+
+$$
+G = \frac{\max_{i} |x_i - \bar{x}|}{s},
+$$
+
+umgerechnet in eine Student-$t$-Statistik mit $N - 2$ Freiheitsgraden und Bonferroni-korrigiert dafür, dass jede Beobachtung getestet wurde.
+
+{/*---FUN hypGrubbsSingle--*/}
+
+```kotlin
+// Response times (ms) with a suspected outlier
+val latencies = doubleArrayOf(12.0, 14.0, 11.0, 13.0, 15.0, 98.0, 12.0)
+
+val result = grubbsTest(latencies)
+result.statistic // G statistic
+result.pValue // Bonferroni-corrected p-value
+result.additionalInfo["outlierIndex"] // index of the suspected outlier
+result.additionalInfo["outlierValue"] // the suspected outlier's value
+result.isSignificant() // true if outlier is significant at α = 0.05
+```
+
+{/*---END--*/}
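+
+Die Statistik selbst lässt sich in wenigen Zeilen nachrechnen. Eine Kotlin-Prüfung (nicht die Bibliotheks-API) für die Latenzdaten oben:
+
+```kotlin
+// G = max |x - mean| / s for the latency sample
+val xs = doubleArrayOf(12.0, 14.0, 11.0, 13.0, 15.0, 98.0, 12.0)
+val mean = xs.average()  // 25.0
+val s = kotlin.math.sqrt(xs.sumOf { (it - mean) * (it - mean) } / (xs.size - 1))
+val g = xs.maxOf { kotlin.math.abs(it - mean) } / s  // ≈ 2.266, getrieben von der 98.0
+```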
+
+Verwenden Sie `Alternative.GREATER` oder `Alternative.LESS`, um einseitig zu testen, wenn Sie ausschließlich an einem auffällig großen bzw. kleinen Wert interessiert sind:
+
+{/*---FUN hypGrubbsDirection--*/}
+
+```kotlin
+// Only test for a suspiciously large value (upper tail)
+val data = doubleArrayOf(2.1, 2.5, 2.3, 2.8, 10.0, 2.4, 2.2)
+val upper = grubbsTest(data, alternative = Alternative.GREATER)
+upper.additionalInfo["outlierValue"] // 10.0 — the maximum
+
+// Only test for a suspiciously small value (lower tail)
+val dataLow = doubleArrayOf(2.1, 2.5, 2.3, 2.8, -5.0, 2.4, 2.2)
+val lower = grubbsTest(dataLow, alternative = Alternative.LESS)
+lower.additionalInfo["outlierValue"] // -5.0 — the minimum
+```
+
+{/*---END--*/}
+
+Für mehrere Ausreißer wendet `grubbsTestIterative()` den Test wiederholt an und entfernt jeweils einen signifikanten Ausreißer, bis keiner mehr übrig ist oder die Stichprobe unter drei Beobachtungen fällt.
+
+{/*---FUN hypGrubbsIterative--*/}
+
+```kotlin
+// Remove multiple outliers by repeatedly applying the test
+val data = doubleArrayOf(10.0, 11.0, 12.0, 13.0, 14.0, 80.0, 90.0)
+val cleaned = grubbsTestIterative(data, alpha = 0.05)
+
+cleaned.outlierIndices // indices (in the original array) that were removed
+cleaned.cleanedData // observations after removing all detected outliers
+cleaned.iterations // TestResult from each round (last one is non-significant)
+```
+
+{/*---END--*/}
+
+
+Der Grubbs-Test setzt voraus, dass die Daten bis auf den Ausreißer näherungsweise normalverteilt sind. Validieren Sie mit `shapiroWilkTest()` an den Daten nach Entfernen des Verdächtigen, bevor Sie ein signifikantes Ergebnis berichten.
+
+
+
+Das iterative Verfahren kann Ausreißer **maskieren**, wenn mehrere Extreme zusammenliegen — jeder Einzeltest wird durch seine Nachbarn verdünnt. Bevorzugen Sie bei großen Clustern einen speziellen Mehrfach-Ausreißer-Test (z. B. verallgemeinertes ESD) oder einen robusten Schätzer.
+
+
## Die Alternative-Aufzählung
`Alternative` steuert die Richtung des Tests:
diff --git a/docs/getting-started/installation.mdx b/docs/getting-started/installation.mdx
index cf677b5..8e7e686 100644
--- a/docs/getting-started/installation.mdx
+++ b/docs/getting-started/installation.mdx
@@ -75,7 +75,7 @@ Start with `kstats-core` for descriptive summaries. Add `kstats-distributions` w
## Next Steps
-
+
Run the first analysis with examples across all modules.
diff --git a/docs/getting-started/introduction.mdx b/docs/getting-started/introduction.mdx
index 25907e4..b0d9110 100644
--- a/docs/getting-started/introduction.mdx
+++ b/docs/getting-started/introduction.mdx
@@ -67,7 +67,7 @@ fitted.cdf(6.0) // 0.6335
Add the BOM or a single module to a Gradle KTS project.
-
+
Run a summary, fit a distribution, and execute a hypothesis test.
diff --git a/docs/hypothesis/overview.mdx b/docs/hypothesis/overview.mdx
index 7944e41..4a61cd1 100644
--- a/docs/hypothesis/overview.mdx
+++ b/docs/hypothesis/overview.mdx
@@ -320,6 +320,76 @@ result.pValue // p-value
{/*---END--*/}
+## Are any observations outliers?
+
+### Grubbs' test
+
+Grubbs' test (the extreme studentized deviate test) formally checks whether the observation farthest from the mean is an outlier, assuming the remaining data is approximately normal. The test statistic is
+
+$$
+G = \frac{\max_{i} |x_i - \bar{x}|}{s},
+$$
+
+converted to a Student-$t$ statistic on $N - 2$ degrees of freedom and Bonferroni-corrected for having tested every observation.
+
+{/*---FUN hypGrubbsSingle--*/}
+
+```kotlin
+// Response times (ms) with a suspected outlier
+val latencies = doubleArrayOf(12.0, 14.0, 11.0, 13.0, 15.0, 98.0, 12.0)
+
+val result = grubbsTest(latencies)
+result.statistic // G statistic
+result.pValue // Bonferroni-corrected p-value
+result.additionalInfo["outlierIndex"] // index of the suspected outlier
+result.additionalInfo["outlierValue"] // the suspected outlier's value
+result.isSignificant() // true if outlier is significant at α = 0.05
+```
+
+{/*---END--*/}
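+
+The statistic itself takes only a few lines to recompute. A plain-Kotlin check (not the library API) for the latency data above:
+
+```kotlin
+// G = max |x - mean| / s for the latency sample
+val xs = doubleArrayOf(12.0, 14.0, 11.0, 13.0, 15.0, 98.0, 12.0)
+val mean = xs.average()  // 25.0
+val s = kotlin.math.sqrt(xs.sumOf { (it - mean) * (it - mean) } / (xs.size - 1))
+val g = xs.maxOf { kotlin.math.abs(it - mean) } / s  // ≈ 2.266, driven by the 98.0
+```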
+
+Use `Alternative.GREATER` or `Alternative.LESS` to test a single tail when you only care about a suspiciously large or small value:
+
+{/*---FUN hypGrubbsDirection--*/}
+
+```kotlin
+// Only test for a suspiciously large value (upper tail)
+val data = doubleArrayOf(2.1, 2.5, 2.3, 2.8, 10.0, 2.4, 2.2)
+val upper = grubbsTest(data, alternative = Alternative.GREATER)
+upper.additionalInfo["outlierValue"] // 10.0 — the maximum
+
+// Only test for a suspiciously small value (lower tail)
+val dataLow = doubleArrayOf(2.1, 2.5, 2.3, 2.8, -5.0, 2.4, 2.2)
+val lower = grubbsTest(dataLow, alternative = Alternative.LESS)
+lower.additionalInfo["outlierValue"] // -5.0 — the minimum
+```
+
+{/*---END--*/}
+
+For multiple outliers, `grubbsTestIterative()` reapplies the test and removes one significant outlier at a time until none remain or the sample shrinks below three observations.
+
+{/*---FUN hypGrubbsIterative--*/}
+
+```kotlin
+// Remove multiple outliers by repeatedly applying the test
+val data = doubleArrayOf(10.0, 11.0, 12.0, 13.0, 14.0, 80.0, 90.0)
+val cleaned = grubbsTestIterative(data, alpha = 0.05)
+
+cleaned.outlierIndices // indices (in the original array) that were removed
+cleaned.cleanedData // observations after removing all detected outliers
+cleaned.iterations // TestResult from each round (last one is non-significant)
+```
+
+{/*---END--*/}
+
+
+Grubbs' test assumes the data is approximately normal apart from the outlier. Validate with `shapiroWilkTest()` on the data with the suspected extreme removed before reporting a significant result.
+
+
+
+The iterative procedure can **mask** outliers when several extremes cluster together — each test may be diluted by its peers. For large clusters prefer a dedicated multiple-outlier test (e.g. generalized ESD) or a robust estimator.
+
+
## The Alternative Enum
`Alternative` controls the direction of the test:
diff --git a/kstats-core/src/commonTest/kotlin/org/oremif/kstats/samples/DocsSamples.kt b/kstats-core/src/commonTest/kotlin/org/oremif/kstats/samples/DocsSamples.kt
index bd08ee4..0712ecc 100644
--- a/kstats-core/src/commonTest/kotlin/org/oremif/kstats/samples/DocsSamples.kt
+++ b/kstats-core/src/commonTest/kotlin/org/oremif/kstats/samples/DocsSamples.kt
@@ -3,6 +3,7 @@ package org.oremif.kstats.samples
import org.oremif.kstats.descriptive.*
import kotlin.test.Test
import kotlin.test.assertEquals
+import kotlin.test.assertTrue
class DocsSamples {
@@ -349,6 +350,162 @@ class DocsSamples {
// SampleEnd
}
+ @Test
+ fun coreProcessCapability() {
+ // SampleStart
+ // Ten parts measured against a spec window of [48, 52]
+ val measurements = doubleArrayOf(
+ 50.0, 50.5, 49.5, 50.2, 49.8, 50.1, 49.9, 50.3, 49.7, 50.0
+ )
+ val capability = measurements.processCapability(lsl = 48.0, usl = 52.0)
+
+ capability.cp // 2.2646 — potential capability (spread vs tolerance)
+ capability.cpk // 2.2646 — actual capability (penalizes off-centering)
+ capability.pp // 2.3870 — overall (population σ) counterpart of Cp
+ capability.ppk // 2.3870 — overall counterpart of Cpk
+ // SampleEnd
+ assertEquals(2.26455406828919, capability.cp, 1e-4)
+ assertEquals(2.26455406828918, capability.cpk, 1e-4)
+ assertEquals(2.38704958013144, capability.pp, 1e-4)
+ assertEquals(2.38704958013144, capability.ppk, 1e-4)
+ }
+
+ @Test
+ fun coreXBarRChart() {
+ // SampleStart
+ // Five subgroups of four parts; bracket width monitored per batch
+ val subgroups = listOf(
+ doubleArrayOf(72.0, 84.0, 79.0, 49.0),
+ doubleArrayOf(56.0, 87.0, 33.0, 42.0),
+ doubleArrayOf(55.0, 73.0, 22.0, 60.0),
+ doubleArrayOf(44.0, 80.0, 54.0, 74.0),
+ doubleArrayOf(97.0, 26.0, 48.0, 58.0),
+ )
+ val chart = xBarRChart(subgroups)
+
+ chart.centerLine // 59.65 — grand mean (x-double-bar)
+ chart.ucl // 95.6626 — upper control limit for the mean
+ chart.lcl // 23.6374 — lower control limit for the mean
+ chart.rChart.centerLine // 49.4 — average range (R-bar)
+ chart.rChart.ucl // 112.7308 — upper limit for within-subgroup range
+ chart.rChart.lcl // 0.0 — lower limit (D₃ = 0 for n ≤ 6)
+ // SampleEnd
+ assertEquals(59.65, chart.centerLine, 1e-4)
+ assertEquals(95.6626, chart.ucl, 1e-4)
+ assertEquals(23.6374, chart.lcl, 1e-4)
+ assertEquals(49.4, chart.rChart.centerLine, 1e-4)
+ assertEquals(112.7308, chart.rChart.ucl, 1e-4)
+ assertEquals(0.0, chart.rChart.lcl, 1e-4)
+ }
+
+ @Test
+ fun coreXBarSChart() {
+ // SampleStart
+ // Same subgroups, S chart uses sample standard deviation instead of range
+ val subgroups = listOf(
+ doubleArrayOf(10.0, 12.0, 11.0, 13.0, 9.0),
+ doubleArrayOf(11.0, 10.0, 12.0, 11.0, 14.0),
+ doubleArrayOf(9.0, 13.0, 10.0, 12.0, 11.0),
+ )
+ val chart = xBarSChart(subgroups)
+
+ chart.centerLine // 11.2 — grand mean
+ chart.ucl // upper control limit for the mean (uses A₃)
+ chart.lcl // lower control limit for the mean
+ chart.sChart.centerLine // S-bar — average subgroup standard deviation
+ chart.sChart.ucl // upper limit for within-subgroup spread (B₄)
+ chart.sChart.lcl // lower limit (B₃ = 0 for n ≤ 5)
+ // SampleEnd
+ assertEquals(11.2, chart.centerLine, 1e-4)
+ assertTrue(chart.ucl > chart.centerLine)
+ assertTrue(chart.lcl < chart.centerLine)
+ assertTrue(chart.sChart.centerLine > 0.0)
+ assertTrue(chart.sChart.ucl >= chart.sChart.lcl)
+ }
+
+ @Test
+ fun coreSpcConstants() {
+ // SampleStart
+ val c = spcConstants(subgroupSize = 5)
+ c.a2 // 0.577 — x-bar factor from R-bar
+ c.a3 // 1.427 — x-bar factor from S-bar
+ c.d3 // 0.000 — R-chart lower factor (zero for n ≤ 6)
+ c.d4 // 2.114 — R-chart upper factor
+ c.b3 // 0.000 — S-chart lower factor
+ c.b4 // 2.089 — S-chart upper factor
+ c.c4 // 0.9400 — bias correction for sample σ
+ // SampleEnd
+ assertEquals(0.577, c.a2, 0.0)
+ assertEquals(1.427, c.a3, 0.0)
+ assertEquals(0.0, c.d3, 0.0)
+ assertEquals(2.114, c.d4, 0.0)
+ assertEquals(0.0, c.b3, 0.0)
+ assertEquals(2.089, c.b4, 0.0)
+ assertEquals(0.9400, c.c4, 0.0)
+ }
+
+ @Test
+ fun coreCusum() {
+ // SampleStart
+ // Individual measurements from a process with target 10, drifting upward
+ val observations = doubleArrayOf(10.2, 10.4, 10.6, 10.9, 11.2, 11.5, 11.8, 12.0)
+ val result = cusum(observations, target = 10.0, k = 0.5, h = 3.0)
+
+ result.sPlus // [0.0, 0.0, 0.1, 0.5, 1.2, 2.2, 3.5, 5.0]
+ result.sMinus // all zero — no downward drift
+ result.alarmIndex // 6 — first index where C⁺ > H
+ // SampleEnd
+ assertEquals(0.0, result.sPlus[0], 1e-10)
+ assertEquals(3.5, result.sPlus[6], 1e-10)
+ assertEquals(5.0, result.sPlus[7], 1e-10)
+ for (v in result.sMinus) assertEquals(0.0, v, 1e-10)
+ assertEquals(6, result.alarmIndex)
+ }
+
+ @Test
+ fun coreEwma() {
+ // SampleStart
+ // EWMA chart: target = 25, σ = 1, λ = 0.2, L = 3
+ val observations = doubleArrayOf(25.0, 24.5, 25.2, 26.1, 25.8, 27.0, 26.5, 28.0)
+ val result = ewma(
+ observations,
+ target = 25.0,
+ sigma = 1.0,
+ lambda = 0.2,
+ controlLimitWidth = 3.0
+ )
+
+ result.smoothedValues[0] // 25.0 — Z₁ = λ·x₁ + (1-λ)·target
+ result.smoothedValues[7] // 26.2549 — smoothed statistic at t = 7
+ result.ucl[0] // 25.6 — narrow at first, widens with t
+ result.ucl[7] // 25.9858 — approaching steady state
+ result.outOfControl // [7] — Z₇ exceeds UCL₇
+ // SampleEnd
+ assertEquals(25.0, result.smoothedValues[0], 1e-10)
+ assertEquals(26.2549248, result.smoothedValues[7], 1e-6)
+ assertEquals(25.6, result.ucl[0], 1e-10)
+ assertEquals(25.9858257971513, result.ucl[7], 1e-6)
+ assertTrue(result.outOfControl.contentEquals(intArrayOf(7)))
+ }
+
+ @Test
+ fun coreWesternElectricRules() {
+ // SampleStart
+ // Process drifting upward in the last four observations
+ val observations = doubleArrayOf(
+ 0.1, 0.2, -0.3, 0.0, 1.4, 1.2, 2.4, 2.6, 3.5, 2.2
+ )
+ val violations = westernElectricRules(observations, center = 0.0, sigma = 1.0)
+
+ violations.rule1 // indices of points beyond ±3σ
+ violations.rule2 // indices where 2 of last 3 points are beyond ±2σ (same side)
+ violations.rule3 // indices where 4 of last 5 points are beyond ±1σ (same side)
+ violations.rule4 // indices where 8 consecutive points fall on the same side
+ // SampleEnd
+ assertTrue(violations.rule1.isNotEmpty() || violations.rule2.isNotEmpty() ||
+ violations.rule3.isNotEmpty() || violations.rule4.isNotEmpty())
+ }
+
@Test
fun edaDistributionShape() {
val responseTimeMs = doubleArrayOf(
diff --git a/kstats-hypothesis/src/commonTest/kotlin/org/oremif/kstats/hypothesis/samples/DocsSamples.kt b/kstats-hypothesis/src/commonTest/kotlin/org/oremif/kstats/hypothesis/samples/DocsSamples.kt
index 67807fe..f829177 100644
--- a/kstats-hypothesis/src/commonTest/kotlin/org/oremif/kstats/hypothesis/samples/DocsSamples.kt
+++ b/kstats-hypothesis/src/commonTest/kotlin/org/oremif/kstats/hypothesis/samples/DocsSamples.kt
@@ -340,6 +340,49 @@ class DocsSamples {
// SampleEnd
}
+ @Test
+ fun hypGrubbsSingle() {
+ // SampleStart
+ // Response times (ms) with a suspected outlier
+ val latencies = doubleArrayOf(12.0, 14.0, 11.0, 13.0, 15.0, 98.0, 12.0)
+
+ val result = grubbsTest(latencies)
+ result.statistic // G statistic
+ result.pValue // Bonferroni-corrected p-value
+ result.additionalInfo["outlierIndex"] // index of the suspected outlier
+ result.additionalInfo["outlierValue"] // the suspected outlier's value
+ result.isSignificant() // true if outlier is significant at α = 0.05
+ // SampleEnd
+ }
+
+ @Test
+ fun hypGrubbsDirection() {
+ // SampleStart
+ // Only test for a suspiciously large value (upper tail)
+ val data = doubleArrayOf(2.1, 2.5, 2.3, 2.8, 10.0, 2.4, 2.2)
+ val upper = grubbsTest(data, alternative = Alternative.GREATER)
+ upper.additionalInfo["outlierValue"] // 10.0 — the maximum
+
+ // Only test for a suspiciously small value (lower tail)
+ val dataLow = doubleArrayOf(2.1, 2.5, 2.3, 2.8, -5.0, 2.4, 2.2)
+ val lower = grubbsTest(dataLow, alternative = Alternative.LESS)
+ lower.additionalInfo["outlierValue"] // -5.0 — the minimum
+ // SampleEnd
+ }
+
+ @Test
+ fun hypGrubbsIterative() {
+ // SampleStart
+ // Remove multiple outliers by repeatedly applying the test
+ val data = doubleArrayOf(10.0, 11.0, 12.0, 13.0, 14.0, 80.0, 90.0)
+ val cleaned = grubbsTestIterative(data, alpha = 0.05)
+
+ cleaned.outlierIndices // indices (in the original array) that were removed
+ cleaned.cleanedData // observations after removing all detected outliers
+ cleaned.iterations // TestResult from each round (last one is non-significant)
+ // SampleEnd
+ }
+
// =====================================================================
// choosing-a-distribution.mdx
// =====================================================================