
Commit b68f076

fix: fix latex
1 parent 1491115 commit b68f076

File tree

1 file changed: +6 −6 lines


_posts/2020-01-02-maximum_likelihood_estimation_statistical_modeling.md

Lines changed: 6 additions & 6 deletions
@@ -59,7 +59,7 @@ The likelihood function is at the heart of MLE. It measures how likely the obser
 
 $$ x_1, x_2, \dots, x_n $$
 
-These observations are assumed to be drawn from some probability distribution, say $p(x | \theta)$, where $\theta$ represents the unknown parameters of the model. The likelihood function is the product of the probability density (or mass) functions for all observations:
+These observations are assumed to be drawn from some probability distribution, say $$p(x | \theta)$$, where $$\theta$$ represents the unknown parameters of the model. The likelihood function is the product of the probability density (or mass) functions for all observations:
 
 $$ L(\theta) = p(x_1 \mid \theta) \times p(x_2 \mid \theta) \times \dots \times p(x_n \mid \theta) $$
 
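For context on the paragraph changed above (the change itself only switches inline `$...$` math to kramdown's `$$...$$` delimiters): the likelihood-as-a-product idea can be sketched numerically. A minimal NumPy sketch with made-up data, not taken from the post itself:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def likelihood(data, mu, sigma):
    """L(theta): product of per-observation densities p(x_i | theta)."""
    return np.prod(normal_pdf(data, mu, sigma))

data = np.array([1.2, 0.8, 1.1, 0.9, 1.0])  # hypothetical observations

# A parameter value close to the data yields a larger likelihood
# than a far-off one.
print(likelihood(data, mu=1.0, sigma=0.2))
print(likelihood(data, mu=3.0, sigma=0.2))
```

For more than a handful of observations this product underflows quickly, which is exactly why the post moves to the log-likelihood next.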
@@ -75,11 +75,11 @@ $$ \log L(\theta) = \sum_{i=1}^{n} \log p(x_i \mid \theta) $$
 
 ### 2.3 Maximization
 
-The objective of MLE is to find the parameter values that maximize the log-likelihood function. This is typically done by taking the derivative of the log-likelihood with respect to the parameter $\theta$, setting it equal to zero, and solving for $\theta$:
+The objective of MLE is to find the parameter values that maximize the log-likelihood function. This is typically done by taking the derivative of the log-likelihood with respect to the parameter $$\theta$$, setting it equal to zero, and solving for $$\theta$$:
 
 $$ \frac{\partial}{\partial \theta} \log L(\theta) = 0 $$
 
-This solution gives the maximum likelihood estimate of $\theta$, which is denoted as $\hat{\theta}$.
+This solution gives the maximum likelihood estimate of $$\theta$$, which is denoted as $$\hat{\theta}$$.
 
 ## 3. Why MLE is Essential in Data Science
 
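The maximization step described in this hunk can be checked numerically. A minimal sketch (hypothetical data, normal model with known $$\sigma$$): solving the derivative condition analytically gives the sample mean, and a brute-force grid search over the log-likelihood agrees:

```python
import numpy as np

def log_likelihood(data, mu, sigma=1.0):
    """log L(mu) for N(mu, sigma^2): sum of per-observation log-densities."""
    return np.sum(-0.5 * ((data - mu) / sigma) ** 2
                  - np.log(sigma * np.sqrt(2 * np.pi)))

data = np.array([1.2, 0.8, 1.1, 0.9, 1.0])  # hypothetical observations

# Analytic solution of d/dmu log L(mu) = 0 for the normal model:
mu_hat = data.mean()

# Numerical check: the sample mean maximizes log L over a fine grid.
grid = np.linspace(-2, 4, 601)
best = grid[np.argmax([log_likelihood(data, m) for m in grid])]
print(mu_hat, best)  # both ≈ 1.0
```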
@@ -303,16 +303,16 @@ Subclasses are expected to implement these methods.
 
 #### Normal Distribution MLE (`MLENormal`):
 
-- The `log_likelihood()` method computes the log-likelihood for the normal distribution given mean ($\mu$) and variance ($\sigma^2$).
+- The `log_likelihood()` method computes the log-likelihood for the normal distribution given mean ($$\mu$$) and variance ($$\sigma^2$$).
 - The `fit()` method estimates the parameters (mean and variance) using the following formulas:
 
 $$ \hat{\mu} = \text{mean}(data) $$
 $$ \hat{\sigma^2} = \text{variance}(data) $$
 
 #### Bernoulli Distribution MLE (`MLEBernoulli`):
 
-- The `log_likelihood()` method computes the log-likelihood for the Bernoulli distribution based on the probability $p$ of success.
-- The `fit()` method estimates the probability $p$ using the formula:
+- The `log_likelihood()` method computes the log-likelihood for the Bernoulli distribution based on the probability $$p$$ of success.
+- The `fit()` method estimates the probability $$p$$ using the formula:
 
 $$ \hat{p} = \text{mean}(data) $$
 
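The post's actual class implementations are not reproduced on this page; a minimal sketch consistent with the interface this hunk describes (a base class whose subclasses implement `log_likelihood()` and `fit()`, with the closed-form estimators given above — the base-class name and method bodies here are assumptions):

```python
import numpy as np

class MLEBase:
    """Hypothetical base class; the diff only says subclasses
    must implement log_likelihood() and fit()."""
    def log_likelihood(self, data):
        raise NotImplementedError
    def fit(self, data):
        raise NotImplementedError

class MLENormal(MLEBase):
    def fit(self, data):
        # Closed-form MLE: mu_hat = mean(data), sigma2_hat = variance(data)
        # (the MLE variance is the biased estimator, i.e. ddof=0).
        self.mu = np.mean(data)
        self.sigma2 = np.var(data)
        return self
    def log_likelihood(self, data):
        data = np.asarray(data)
        return np.sum(-0.5 * np.log(2 * np.pi * self.sigma2)
                      - (data - self.mu) ** 2 / (2 * self.sigma2))

class MLEBernoulli(MLEBase):
    def fit(self, data):
        # Closed-form MLE: p_hat = mean of the 0/1 outcomes.
        self.p = np.mean(data)
        return self
    def log_likelihood(self, data):
        data = np.asarray(data)
        return np.sum(data * np.log(self.p) + (1 - data) * np.log(1 - self.p))

coin = [1, 0, 1, 1, 0, 1]  # hypothetical coin flips
model = MLEBernoulli().fit(coin)
print(model.p)  # p_hat equals the sample mean, 4/6
```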
0 commit comments

Comments
 (0)