From a793fe7f32ac9b4c9838b027134a27bd8283b148 Mon Sep 17 00:00:00 2001
From: Deepam Sarmah <88540910+deepam20050@users.noreply.github.com>
Date: Tue, 22 Apr 2025 20:30:29 +0530
Subject: [PATCH] Fixed typo in dimensions of bias term

---
 chapter_linear-classification/softmax-regression.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/chapter_linear-classification/softmax-regression.md b/chapter_linear-classification/softmax-regression.md
index 08a3ec117b..dc5f02a696 100644
--- a/chapter_linear-classification/softmax-regression.md
+++ b/chapter_linear-classification/softmax-regression.md
@@ -245,7 +245,7 @@ Assume that we are given a minibatch $\mathbf{X} \in \mathbb{R}^{n \times d}$
 of $n$ examples with dimensionality (number of inputs) $d$.
 Moreover, assume that we have $q$ categories in the output.
 Then the weights satisfy $\mathbf{W} \in \mathbb{R}^{d \times q}$
-and the bias satisfies $\mathbf{b} \in \mathbb{R}^{1\times q}$.
+and the bias satisfies $\mathbf{b} \in \mathbb{R}^{n \times q}$.
 
 $$ \begin{aligned} \mathbf{O} &= \mathbf{X} \mathbf{W} + \mathbf{b}, \\ \hat{\mathbf{Y}} & = \mathrm{softmax}(\mathbf{O}). \end{aligned} $$
 :eqlabel:`eq_minibatch_softmax_reg`
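
For reference, a minimal NumPy sketch (not part of the patch; the sizes `n`, `d`, `q` below are arbitrary) illustrating the shapes in `eq_minibatch_softmax_reg`: the bias can be stored as a single row and broadcast, which yields the same result as explicitly repeating it across the $n$ rows of $\mathbf{X}\mathbf{W}$.

```python
import numpy as np

n, d, q = 4, 5, 3                        # minibatch size, inputs, categories
X = np.random.randn(n, d)                # X in R^{n x d}
W = np.random.randn(d, q)                # W in R^{d x q}
b = np.random.randn(1, q)                # bias stored as a single row

# Broadcasting adds the same row b to every row of XW, so the bias term
# effectively contributes an (n, q) array even though only (1, q) numbers
# are stored.
O = X @ W + b                            # O in R^{n x q}
O_explicit = X @ W + np.tile(b, (n, 1))  # explicit (n, q) copy of the bias
assert O.shape == (n, q)
assert np.allclose(O, O_explicit)

# Row-wise softmax turns each row of O into a probability vector.
Y_hat = np.exp(O) / np.exp(O).sum(axis=1, keepdims=True)
assert np.allclose(Y_hat.sum(axis=1), 1.0)
```

Either notation describes the same computation: $\mathbb{R}^{1 \times q}$ reflects the parameters actually stored, while $\mathbb{R}^{n \times q}$ reflects the shape of the summand after broadcasting.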