Logistic Regression
Bernoulli Setting
Assume \(Y_i\) follows a Bernoulli distribution given the \(i^{th}\) observation \(\boldsymbol{\mathbf{x}}_i\) and the parameters \(\boldsymbol{\mathbf{\beta}}\) \[ Y_i \mid \boldsymbol{\mathbf{x}}_i \sim \text{Bernoulli}(p_i) \] We can write the probability mass function for \(y_i\) as follows \[ \mathbb{P}(Y_i=y_i\mid \boldsymbol{\mathbf{x}}_i, p_i) = p_i^{y_i}(1 - p_i)^{1- y_i} \] We assume that the log-odds is a linear combination of the input \[ \ln\left(\frac{p_i}{1 - p_i}\right)= \boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}\qquad \text{i.e.} \qquad p_i = \frac{1}{1 + \exp(-\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})} = \frac{\exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})}{1 + \exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})} = \frac{\exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})}{1 + \exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})} = \sigma(\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}}) \]
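As a quick numerical illustration, here is a minimal sketch (assuming NumPy; the observation and coefficients are made-up placeholders) showing that the sigmoid of the linear predictor gives \(p_i\) and that the log-odds recovers \(\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}}\):

```python
import numpy as np

def sigmoid(z):
    # logistic function sigma(z) = 1 / (1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

x_i = np.array([1.0, 0.5, -1.2])      # hypothetical observation (first entry is the intercept term)
beta = np.array([0.3, -0.7, 1.1])     # hypothetical coefficients

p_i = sigmoid(x_i @ beta)             # P(Y_i = 1 | x_i)
log_odds = np.log(p_i / (1.0 - p_i))  # recovers x_i^T beta up to floating-point error
print(p_i, log_odds, x_i @ beta)
```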
Joint Log-Likelihood
The joint likelihood is then found assuming IID-ness \[ \begin{align} p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}}) &= \prod_{i=1}^n \mathbb{P}(Y_i=y_i\mid \boldsymbol{\mathbf{x}}_i, p_i) \\ &= \prod_{i=1}^n p_i^{y_i}(1 - p_i)^{1- y_i} \end{align} \] The log-likelihood is then \[ \begin{align} \ln p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}}) &= \ln\left(\prod_{i=1}^n p_i^{y_i}(1 - p_i)^{1- y_i}\right) \\ &= \sum_{i=1}^n y_i \ln(p_i) + (1 - y_i)\ln(1 - p_i) \end{align} \] Alternatively, the likelihood can be written as follows \[ \begin{align} p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}}) &= \prod_{i=1}^n p_i^{y_i}(1 - p_i)^{1- y_i} \\ &= \prod_{i=1}^n \left(\frac{\exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})}{1 + \exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})}\right)^{y_i} \left(1 - \frac{\exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})}{1 + \exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})}\right)^{1 - y_i} \\ &= \prod_{i=1}^n \left(\frac{\exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})}{1 + \exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})}\right)^{y_i} \left(\frac{1}{1 + \exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})}\right)^{1 - y_i} \\ &= \prod_{i=1}^n \left(\frac{\exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})}{1 + \exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})}\right)^{y_i} \left(\frac{1}{1 + \exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})}\right)\left(1 + \exp(\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}})\right)^{y_i} \\ &= \prod_{i=1}^n \exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}y_i)\frac{1}{\left(1 + \exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})\right)^{y_i}} \left(\frac{1}{1 + \exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})}\right)\left(1 + \exp(\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}})\right)^{y_i} \\ &= \prod_{i=1}^n \frac{\exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}y_i)}{1 + \exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})} \end{align} \]
Taking the logarithm of this expression gives \[ \begin{align} \ln(p(\boldsymbol{\mathbf{y}}\mid\boldsymbol{\mathbf{\beta}})) &= \sum_{i=1}^n \ln\left(\frac{\exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}y_i)}{1 + \exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})}\right) \\ &= \sum_{i=1}^n \ln(\exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}y_i)) - \ln(1 + \exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})) \\ &= \sum_{i=1}^n \boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}y_i - \ln(1 + \exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})) \end{align} \]
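The two forms of the log-likelihood derived above are algebraically identical; the following sketch (assuming NumPy, with synthetic data and coefficients) checks this numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # design matrix with intercept column
beta = rng.normal(size=p)
y = rng.integers(0, 2, size=n)                                   # synthetic 0/1 labels

eta = X @ beta                        # linear predictors x_i^T beta
p_i = 1.0 / (1.0 + np.exp(-eta))      # sigma(x_i^T beta)

# First form: sum_i y_i ln(p_i) + (1 - y_i) ln(1 - p_i)
ll_1 = np.sum(y * np.log(p_i) + (1 - y) * np.log(1 - p_i))
# Second form: sum_i x_i^T beta y_i - ln(1 + exp(x_i^T beta))
ll_2 = np.sum(eta * y - np.log1p(np.exp(eta)))

assert np.isclose(ll_1, ll_2)
```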
Maximum Likelihood \(\equiv\) Minimize Loss Function
Our aim is to maximize the likelihood. This is equivalent to maximizing the log-likelihood, which is equivalent to minimizing the negative log-likelihood. \[ \min_{\boldsymbol{\mathbf{\beta}}} -\sum_{i=1}^n y_i \ln(p_i) + (1 - y_i)\ln(1 - p_i) \] Let’s consider the expression inside the summation \[ y_i \ln(p_i) + (1 - y_i)\ln(1 - p_i) \] We can notice that \[ y_i \ln(p_i) + (1 - y_i)\ln(1 - p_i) = \begin{cases} \ln(1 - p_i) && \text{if } y_i=0\\ \ln(p_i) && \text{if } y_i = 1 \end{cases} \] Remember that \(p_i = \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})\) and that \(1 - \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}) = \sigma(-\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}})\) \[ y_i \ln(\sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})) + (1 - y_i)\ln(\sigma(-\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}})) = \begin{cases} \ln(\sigma(-\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}})) && \text{if } y_i=0\\ \ln(\sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})) && \text{if } y_i = 1 \end{cases} \] We are thus looking for a factor multiplying \(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}\) that equals \(-1\) when \(y_i=0\) and \(1\) when \(y_i=1\); notice that \(2 y_i - 1\) does the job. Thus we can write our problem as \[ \begin{align} \min_{\boldsymbol{\mathbf{\beta}}} -\sum_{i=1}^n \ln\left(\sigma((2y_i - 1)\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})\right) &= \min_{\boldsymbol{\mathbf{\beta}}}\sum_{i=1}^n - \ln\left(\frac{1}{1 + \exp((1 - 2y_i)\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}})}\right) \\ &= \min_{\boldsymbol{\mathbf{\beta}}}\sum_{i=1}^n\ln\left(1 + \exp((1 - 2y_i)\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}})\right) \\ &= \min_{\boldsymbol{\mathbf{\beta}}} \sum_{i=1}^n L(\boldsymbol{\mathbf{x}}_i^\top, y_i ; \boldsymbol{\mathbf{\beta}}) \end{align} \] where the loss incurred using parameter \(\boldsymbol{\mathbf{\beta}}\) when predicting the label for \(\boldsymbol{\mathbf{x}}_i^\top\) whose true label is \(y_i\) is \[ L(\boldsymbol{\mathbf{x}}_i^\top, y_i; \boldsymbol{\mathbf{\beta}}) = \begin{cases} \ln\left(1 + \exp(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})\right) && \text{if } y_i= 0\\ \ln\left(1 + \exp(-\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})\right) && \text{if } y_i = 1 \end{cases} \]
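In code, \(\ln(1 + \exp(z))\) is best evaluated with a log-sum-exp primitive so that large \(z\) does not overflow. A minimal sketch of the loss in the \((1-2y_i)\) form, assuming NumPy (`X`, `y` and `beta` below are made-up placeholders):

```python
import numpy as np

def logistic_loss(beta, X, y):
    """Sum of ln(1 + exp((1 - 2 y_i) x_i^T beta)) over the observations."""
    z = (1 - 2 * y) * (X @ beta)
    # ln(1 + exp(z)) evaluated stably as logaddexp(0, z)
    return np.sum(np.logaddexp(0.0, z))

# Example call with made-up data
X = np.array([[1.0, 0.2], [1.0, -1.3], [1.0, 0.7]])
y = np.array([1, 0, 1])
beta = np.array([0.1, 0.5])
print(logistic_loss(beta, X, y))
```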
Maximum-A-Posteriori (MAP)
Recall from Bayes’ rule \[ p(\boldsymbol{\mathbf{\beta}}\mid \boldsymbol{\mathbf{y}}) = \frac{p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}})p(\boldsymbol{\mathbf{\beta}})}{p(\boldsymbol{\mathbf{y}})} \propto p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}}) p(\boldsymbol{\mathbf{\beta}}) \] Taking the logarithm of both sides and multiplying by \(-1\) we obtain, up to an additive constant that does not depend on \(\boldsymbol{\mathbf{\beta}}\), \[ -\ln(p(\boldsymbol{\mathbf{\beta}}\mid \boldsymbol{\mathbf{y}})) = -\ln(p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}})) - \ln(p(\boldsymbol{\mathbf{\beta}})) + \text{const} \] We can choose to use a Gaussian prior on the parameters \(p(\boldsymbol{\mathbf{\beta}}) = N(\boldsymbol{\mathbf{\mu}}_0, \boldsymbol{\mathbf{\Sigma}}_0)\). If \(\boldsymbol{\mathbf{\beta}}\in\mathbb{R}^{p\times 1}\) then \[ \ln(p(\boldsymbol{\mathbf{\beta}})) = -\frac{p}{2}\ln(2\pi) - \frac{1}{2}\ln(|\Sigma_0|)-\frac{1}{2}(\boldsymbol{\mathbf{\beta}}- \boldsymbol{\mathbf{\mu}}_0)^\top \boldsymbol{\mathbf{\Sigma}}_0^{-1}(\boldsymbol{\mathbf{\beta}}- \boldsymbol{\mathbf{\mu}}_0) \] Plugging this in, we obtain the following. Notice how we neglect terms that do not depend on \(\boldsymbol{\mathbf{\beta}}\) because they will not matter when we minimize this with respect to \(\boldsymbol{\mathbf{\beta}}\). \[ \begin{align} \min_{\boldsymbol{\mathbf{\beta}}} -\ln(p(\boldsymbol{\mathbf{\beta}}\mid \boldsymbol{\mathbf{y}})) &= \min_{\boldsymbol{\mathbf{\beta}}} -\ln(p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}})) - \ln(p(\boldsymbol{\mathbf{\beta}})) \\ &= \min_{\boldsymbol{\mathbf{\beta}}} -\ln(p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}})) + \frac{p}{2}\ln(2\pi) + \frac{1}{2}\ln(|\Sigma_0|) +\frac{1}{2}(\boldsymbol{\mathbf{\beta}}- \boldsymbol{\mathbf{\mu}}_0)^\top \boldsymbol{\mathbf{\Sigma}}_0^{-1}(\boldsymbol{\mathbf{\beta}}- \boldsymbol{\mathbf{\mu}}_0) \\ &= \min_{\boldsymbol{\mathbf{\beta}}} -\ln(p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}})) + \frac{1}{2}(\boldsymbol{\mathbf{\beta}}- \boldsymbol{\mathbf{\mu}}_0)^\top \boldsymbol{\mathbf{\Sigma}}_0^{-1}(\boldsymbol{\mathbf{\beta}}- \boldsymbol{\mathbf{\mu}}_0) \\ &= \min_{\boldsymbol{\mathbf{\beta}}} \sum_{i=1}^n\ln\left(1 + \exp((1 - 2y_i)\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}})\right) + \frac{1}{2}(\boldsymbol{\mathbf{\beta}}- \boldsymbol{\mathbf{\mu}}_0)^\top \boldsymbol{\mathbf{\Sigma}}_0^{-1}(\boldsymbol{\mathbf{\beta}}- \boldsymbol{\mathbf{\mu}}_0) \end{align} \]
Ridge Regularization \(\equiv\) Isotropic Gaussian Prior
Now if we set \(\boldsymbol{\mathbf{\mu}}_0 = \boldsymbol{\mathbf{0}}_{p}\) and \(\boldsymbol{\mathbf{\Sigma}}_0 = \sigma^2_{\boldsymbol{\mathbf{\beta}}} I_p\) (this is equivalent to setting a univariate normal prior on each coefficient, with \(p(\beta_j) = N(0, \sigma_{\boldsymbol{\mathbf{\beta}}}^2)\)) we have \[ \min_{\boldsymbol{\mathbf{\beta}}} \sum_{i=1}^n\ln\left(1 + \exp((1 - 2y_i)\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}})\right) + \frac{1}{2}\boldsymbol{\mathbf{\beta}}^\top (\sigma^2_{\boldsymbol{\mathbf{\beta}}} I_p)^{-1}\boldsymbol{\mathbf{\beta}} \] using the fact that for an invertible matrix \(A\) and a non-zero constant \(c\in\mathbb{R}\backslash\{0\}\) we have \((cA)^{-1} = \frac{1}{c}A^{-1}\) we obtain \[ \min_{\boldsymbol{\mathbf{\beta}}} \sum_{i=1}^n\ln\left(1 + \exp((1 - 2y_i)\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}})\right) + \frac{1}{2 \sigma^2_{\boldsymbol{\mathbf{\beta}}}}\boldsymbol{\mathbf{\beta}}^\top\boldsymbol{\mathbf{\beta}} \] Setting \(\lambda:=\frac{1}{\sigma^2_{\boldsymbol{\mathbf{\beta}}}}\) we have regularized logistic regression \[ \min_{\boldsymbol{\mathbf{\beta}}} \sum_{i=1}^n\ln\left(1 + \exp((1 - 2y_i)\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}})\right) + \frac{\lambda}{2}\boldsymbol{\mathbf{\beta}}^\top\boldsymbol{\mathbf{\beta}} \] It is often numerically more convenient to multiply through by \(\sigma^2_{\boldsymbol{\mathbf{\beta}}}\), which gives the equivalent problem \[ \min_{\boldsymbol{\mathbf{\beta}}} \sigma^2_{\boldsymbol{\mathbf{\beta}}}\sum_{i=1}^n\ln\left(1 + \exp((1 - 2y_i)\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}})\right) + \frac{1}{2}\boldsymbol{\mathbf{\beta}}^\top\boldsymbol{\mathbf{\beta}} \]
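As a sketch of how this objective could be minimized numerically with an off-the-shelf optimizer (assuming NumPy and SciPy; the synthetic data, the value of `lam`, and the choice of BFGS are illustrative placeholders):

```python
import numpy as np
from scipy.optimize import minimize

def ridge_logistic_objective(beta, X, y, lam):
    # sum_i ln(1 + exp((1 - 2 y_i) x_i^T beta)) + (lam / 2) ||beta||^2
    z = (1 - 2 * y) * (X @ beta)
    return np.sum(np.logaddexp(0.0, z)) + 0.5 * lam * beta @ beta

# Synthetic data: design matrix with an intercept column
rng = np.random.default_rng(1)
n, p = 200, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([0.5, 1.0, -2.0, 0.0])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))).astype(int)

res = minimize(ridge_logistic_objective, x0=np.zeros(p), args=(X, y, 1.0), method="BFGS")
beta_hat = res.x  # MAP estimate under the isotropic Gaussian prior with lambda = 1 / sigma_beta^2
```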
Ridge Regularization except on Intercept
Partition the parameter vector as
\[
\boldsymbol{\mathbf{\beta}}=
\begin{pmatrix}
\beta_0\\
\beta_1 \\
\vdots \\
\beta_{p-1}
\end{pmatrix} :=
\begin{pmatrix}
\beta_0 \\
\boldsymbol{\mathbf{\beta}}_{1:p-1}
\end{pmatrix}
\]
We’ve defined \(\boldsymbol{\mathbf{\beta}}_{1:p-1}:=(\beta_1, \ldots, \beta_{p-1})^\top\) because in practice we usually do not regularize the intercept. This means that we place a Multivariate Gaussian prior on \(\boldsymbol{\mathbf{\beta}}_{1:p-1}\) only, \(p(\boldsymbol{\mathbf{\beta}}_{1:p-1}) = N(\boldsymbol{\mathbf{0}}_{p-1}, \sigma_{\boldsymbol{\mathbf{\beta}}_{1:p-1}}^2 I_{p-1})\) (again, this is equivalent to putting a univariate normal prior on each of \(\beta_1, \ldots, \beta_{p-1}\) with \(p(\beta_j) = N(0, \sigma^2_{\boldsymbol{\mathbf{\beta}}_{1:p-1}})\)). On \(\beta_0\) we instead place a flat (uniform) prior, whose log-density does not depend on \(\beta_0\) and therefore drops out of the objective.
\[
\min_{\boldsymbol{\mathbf{\beta}}} \sum_{i=1}^n\ln\left(1 + \exp((1 - 2y_i)\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}})\right) + \frac{1}{2 \sigma_{\boldsymbol{\mathbf{\beta}}_{1:p-1}}^2}\boldsymbol{\mathbf{\beta}}_{1:p-1}^\top\boldsymbol{\mathbf{\beta}}_{1:p-1}
\]
Multiplying through by \(\sigma^2_{\boldsymbol{\mathbf{\beta}}_{1:p-1}}\), as before, we obtain
\[
\min_{\boldsymbol{\mathbf{\beta}}} \sigma_{\boldsymbol{\mathbf{\beta}}_{1:p-1}}^2\sum_{i=1}^n\ln\left(1 + \exp((1 - 2y_i)\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}})\right) + \frac{1}{2}\boldsymbol{\mathbf{\beta}}_{1:p-1}^\top\boldsymbol{\mathbf{\beta}}_{1:p-1}
\]
Notice that this is consistent with the objective minimized by scikit-learn's LogisticRegression, which likewise penalizes the coefficients but not the intercept.
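A sketch of the correspondence (assuming scikit-learn and NumPy, with synthetic data): in the rescaled objective above, \(\sigma^2_{\boldsymbol{\mathbf{\beta}}_{1:p-1}}\) plays the role of scikit-learn's inverse regularization strength `C`, and the intercept is left unpenalized.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
X = rng.normal(size=(n, 3))                 # features only; sklearn adds the intercept itself
beta_true = np.array([1.0, -2.0, 0.5])
p1 = 1 / (1 + np.exp(-(0.3 + X @ beta_true)))
y = (rng.random(n) < p1).astype(int)

sigma2 = 2.0  # placeholder prior variance on the non-intercept coefficients
clf = LogisticRegression(penalty="l2", C=sigma2, fit_intercept=True, solver="lbfgs")
clf.fit(X, y)
print(clf.intercept_, clf.coef_)  # intercept unpenalized, coefficients shrunk toward zero
```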
Full-Bayesian (Laplace Approximation)
Full Bayesian inference is intractable. The Laplace approximation approximates \(p(\boldsymbol{\mathbf{\beta}}\mid \boldsymbol{\mathbf{y}})\) with a Gaussian distribution \(q(\boldsymbol{\mathbf{\beta}})\). To find such a distribution, we use the multivariate version of Taylor’s expansion to expand the log posterior around its mode \(\boldsymbol{\mathbf{\beta}}_{\text{MAP}}\). We take a second order approximation \[ \ln p(\boldsymbol{\mathbf{\beta}}\mid \boldsymbol{\mathbf{y}}) \simeq \ln p(\boldsymbol{\mathbf{\beta}}_{\text{MAP}}\mid \boldsymbol{\mathbf{y}}) + \nabla\ln p(\boldsymbol{\mathbf{\beta}}_{\text{MAP}}\mid \boldsymbol{\mathbf{y}})^\top(\boldsymbol{\mathbf{\beta}}- \boldsymbol{\mathbf{\beta}}_{\text{MAP}}) + \frac{1}{2}(\boldsymbol{\mathbf{\beta}}- \boldsymbol{\mathbf{\beta}}_{\text{MAP}})^\top \nabla^2 \ln p(\boldsymbol{\mathbf{\beta}}_{\text{MAP}}\mid \boldsymbol{\mathbf{y}}) (\boldsymbol{\mathbf{\beta}}- \boldsymbol{\mathbf{\beta}}_{\text{MAP}}) \] Since \(\boldsymbol{\mathbf{\beta}}_{\text{MAP}}\) is a stationary point, the gradient at this point will be zero so we have \[ \ln p(\boldsymbol{\mathbf{\beta}}\mid \boldsymbol{\mathbf{y}}) \simeq \ln p(\boldsymbol{\mathbf{\beta}}_{\text{MAP}}\mid \boldsymbol{\mathbf{y}}) + \frac{1}{2}(\boldsymbol{\mathbf{\beta}}- \boldsymbol{\mathbf{\beta}}_{\text{MAP}})^\top \nabla^2 \ln p(\boldsymbol{\mathbf{\beta}}_{\text{MAP}}\mid \boldsymbol{\mathbf{y}}) (\boldsymbol{\mathbf{\beta}}- \boldsymbol{\mathbf{\beta}}_{\text{MAP}}) \] We take the exponential of both sides to obtain \[ p(\boldsymbol{\mathbf{\beta}}\mid \boldsymbol{\mathbf{y}}) \simeq p(\boldsymbol{\mathbf{\beta}}_{\text{MAP}}\mid \boldsymbol{\mathbf{y}}) \exp\left(\frac{1}{2}(\boldsymbol{\mathbf{\beta}}- \boldsymbol{\mathbf{\beta}}_{\text{MAP}})^\top \nabla^2 \ln p(\boldsymbol{\mathbf{\beta}}_{\text{MAP}}\mid \boldsymbol{\mathbf{y}}) (\boldsymbol{\mathbf{\beta}}- \boldsymbol{\mathbf{\beta}}_{\text{MAP}})\right) \] We recognize that this has the shape of a Multivariate Normal distribution. We therefore define our Laplace approximation to be \[ q(\boldsymbol{\mathbf{\beta}}) = (2\pi)^{-\frac{p}{2}}\text{det}\left(-\nabla^2\ln p(\boldsymbol{\mathbf{\beta}}_{\text{MAP}}\mid \boldsymbol{\mathbf{y}})\right)^{\frac{1}{2}} \exp\left(\frac{1}{2}(\boldsymbol{\mathbf{\beta}}- \boldsymbol{\mathbf{\beta}}_{\text{MAP}})^\top \nabla^2 \ln p(\boldsymbol{\mathbf{\beta}}_{\text{MAP}}\mid \boldsymbol{\mathbf{y}}) (\boldsymbol{\mathbf{\beta}}- \boldsymbol{\mathbf{\beta}}_{\text{MAP}})\right) \] That is \[ q(\boldsymbol{\mathbf{\beta}}) = N\left(\boldsymbol{\mathbf{\beta}}_{\text{MAP}}, \left[-\nabla^2 \ln p(\boldsymbol{\mathbf{\beta}}_{\text{MAP}}\mid \boldsymbol{\mathbf{y}})\right]^{-1}\right) \] To find \(\nabla_{\boldsymbol{\mathbf{\beta}}}^2 \ln p(\boldsymbol{\mathbf{\beta}}\mid \boldsymbol{\mathbf{y}})\biggr\rvert_{\boldsymbol{\mathbf{\beta}}=\boldsymbol{\mathbf{\beta}}_{\text{MAP}}}\) we write \[ \nabla_{\boldsymbol{\mathbf{\beta}}}^2 \ln p(\boldsymbol{\mathbf{\beta}}\mid \boldsymbol{\mathbf{y}}) = \nabla_{\boldsymbol{\mathbf{\beta}}}^2 \ln p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}}) + \nabla_{\boldsymbol{\mathbf{\beta}}}^2 \ln p(\boldsymbol{\mathbf{\beta}}) \] and we find each of the expressions on the right-hand side separately. We start with \(\nabla_{\boldsymbol{\mathbf{\beta}}}^2 \ln p(\boldsymbol{\mathbf{\beta}})\).
Recall that if we have a quadratic form \(\boldsymbol{\mathbf{x}}^\top A \boldsymbol{\mathbf{x}}\) the derivative of this quadratic form with respect to \(\boldsymbol{\mathbf{x}}\) is given by \(\boldsymbol{\mathbf{x}}^\top (A + A^\top)\). Applying this to our case, and using the fact that \(\boldsymbol{\mathbf{\Sigma}}_0^{-1}\) is symmetric we have \[ \nabla_{\boldsymbol{\mathbf{\beta}}} \ln p(\boldsymbol{\mathbf{\beta}}) = -\frac{1}{2} (\boldsymbol{\mathbf{\beta}}- \boldsymbol{\mathbf{\mu}}_0)^\top 2\boldsymbol{\mathbf{\Sigma}}_0^{-1} = -(\boldsymbol{\mathbf{\beta}}- \boldsymbol{\mathbf{\mu}}_0)^\top \boldsymbol{\mathbf{\Sigma}}_0^{-1} = -\boldsymbol{\mathbf{\beta}}^\top \boldsymbol{\mathbf{\Sigma}}_0^{-1} + \boldsymbol{\mathbf{\mu}}_0^\top \boldsymbol{\mathbf{\Sigma}}_0^{-1} \] Taking the derivative with respect to \(\boldsymbol{\mathbf{\beta}}\) again we get \[ -\nabla^2_{\boldsymbol{\mathbf{\beta}}} \ln p(\boldsymbol{\mathbf{\beta}}) = \boldsymbol{\mathbf{\Sigma}}_0^{-1} \] To find \(\nabla_{\boldsymbol{\mathbf{\beta}}} \ln p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}})\) we take the derivative componentwise \[ \begin{align} \frac{\partial}{\partial \beta_j} \ln p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}}) &= \sum_{i=1}^n y_i \partial_{\beta_j}\ln \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}) + (1 - y_i)\partial_{\beta_j} \ln \sigma(-\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}) \\ &= \sum_{i=1}^n y_i \frac{\sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})(1 - \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}))}{\sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})} x_i^{(j)} + (1 - y_i) \frac{\sigma(-\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})(1 - \sigma(-\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}))}{\sigma(-\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})}(-x_i^{(j)}) \\ &= \sum_{i=1}^n y_ix_i^{(j)}\sigma(-\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}) + (y_i x_i^{(j)} - x_i^{(j)})\sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}) \end{align} \] Now take the derivative with respect to \(\beta_k\) \[ \begin{align} \partial_{\beta_k}\partial_{\beta_j} \ln p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}}) &= \sum_{i=1}^n y_ix_i^{(j)}\partial_{\beta_k}\sigma(-\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}) + (y_i x_i^{(j)} - x_i^{(j)})\partial_{\beta_k}\sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}) \\ &= \sum_{i=1}^n y_ix_i^{(j)}\sigma(-\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})(1 - \sigma(-\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}))(-x_i^{(k)}) + (y_i x_i^{(j)} - x_i^{(j)})\sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})(1 - \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})) x_i^{(k)} \\ &= \sum_{i=1}^n -y_ix_i^{(j)}x_i^{(k)} \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})(1 - \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})) + (y_i x_i^{(j)} x_i^{(k)} - x_i^{(j)} x_i^{(k)})\sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}) (1 - \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}))\\ &= -\sum_{i=1}^n x_i^{(j)} x_i^{(k)} \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})(1 - \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})) \end{align} \] This tells us that \[ [\nabla^2_{\boldsymbol{\mathbf{\beta}}} \ln 
p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}})]_{kj} = -\sum_{i=1}^n x_i^{(j)} x_i^{(k)} \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})(1 - \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})) \] Note that for a vector \(\boldsymbol{\mathbf{x}}_i :=(1, x_{i}^{(1)}, \ldots, x_{i}^{(p-1)})^\top\) the outer product gives the following symmetric matrix \[ \boldsymbol{\mathbf{x}}_i \boldsymbol{\mathbf{x}}_i^\top = \begin{pmatrix} 1 \\ x_{i}^{(1)} \\ \vdots \\ x_{i}^{(p-1)} \end{pmatrix} \begin{pmatrix} 1 & x_{i}^{(1)} & \cdots & x_{i}^{(p-1)} \end{pmatrix} = \begin{pmatrix} 1 & x_{i}^{(1)} & \cdots & x_{i}^{(p-1)} \\ x_{i}^{(1)} & (x_{i}^{(1)})^2 & \cdots & x_{i}^{(1)} x_{i}^{(p-1)}\\ \vdots & \vdots & \ddots & \vdots\\ x_{i}^{(p-1)} & x_{i}^{(p-1)}x_i^{(1)} & \cdots & (x_{i}^{(p-1)})^2 \end{pmatrix} \] In particular \([\boldsymbol{\mathbf{x}}_i \boldsymbol{\mathbf{x}}_i^\top]_{kj} = x_i^{(j)}x_i^{(k)}\) so that \[ - \nabla_{\boldsymbol{\mathbf{\beta}}}^2 \ln p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}}) = \sum_{i=1}^n \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})(1 - \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})) \boldsymbol{\mathbf{x}}_i\boldsymbol{\mathbf{x}}_i^\top \] Putting everything together we get \[ -\nabla_{\boldsymbol{\mathbf{\beta}}}^2 \ln p(\boldsymbol{\mathbf{\beta}}\mid \boldsymbol{\mathbf{y}}) = \boldsymbol{\mathbf{\Sigma}}_0^{-1} + \sum_{i=1}^n \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})(1 - \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})) \boldsymbol{\mathbf{x}}_i\boldsymbol{\mathbf{x}}_i^\top \]
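Putting this into code, the Laplace covariance is simply the inverse of the matrix above evaluated at the mode. A minimal sketch assuming NumPy (`beta_map`, `X` and `Sigma0_inv` are assumed to come from earlier steps, e.g. one of the optimizers below):

```python
import numpy as np

def laplace_covariance(beta_map, X, Sigma0_inv):
    """Covariance of the Gaussian (Laplace) approximation q(beta) around beta_MAP."""
    s = 1 / (1 + np.exp(-X @ beta_map))       # sigma(x_i^T beta_MAP) for every observation
    w = s * (1 - s)                            # sigma * (1 - sigma)
    # Negative Hessian: Sigma_0^{-1} + sum_i w_i x_i x_i^T = Sigma_0^{-1} + X^T diag(w) X
    neg_hessian = Sigma0_inv + X.T @ (w[:, None] * X)
    return np.linalg.inv(neg_hessian)
```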
Gradient Ascent Optimization (MLE, No Regularization)
The simplest way to find \(\boldsymbol{\mathbf{\beta}}\) that maximizes the likelihood is to do gradient ascent. Remember that when working on the Laplace approximation we found the derivative of the log-likelihood with respect to the \(j^{th}\) component of \(\boldsymbol{\mathbf{\beta}}\). We can rearrange such an expression to get a nicer form. \[ \begin{align} \frac{\partial}{\partial \beta_j} \ln p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}}) &= \sum_{i=1}^n y_ix_i^{(j)}\sigma(-\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}) + (y_i x_i^{(j)} - x_i^{(j)})\sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}) \\ &= \sum_{i=1}^n y_ix_i^{(j)}(1 - \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})) + (y_i x_i^{(j)} - x_i^{(j)})\sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}) \\ &= \sum_{i=1}^n y_ix_i^{(j)} - y_ix_i^{(j)}\sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}) + y_i x_i^{(j)}\sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}) - x_i^{(j)}\sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}) \\ &= \sum_{i=1}^n \left[y_i - \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})\right]x_i^{(j)} \end{align} \] Therefore the full gradient is given by \[ \begin{align} \nabla_{\boldsymbol{\mathbf{\beta}}} \ln p(\boldsymbol{\mathbf{y}}\mid\boldsymbol{\mathbf{\beta}}) &= \left(\frac{\partial}{\partial \beta_0} \ln p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}}), \ldots, \frac{\partial}{\partial \beta_{p-1}} \ln p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}})\right)^\top \\ &= \left(\sum_{i=1}^n(y_i - \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}))x_i^{(0)}, \ldots, \sum_{i=1}^n(y_i - \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}))x_i^{(p-1)}\right)^\top \\ &= \sum_{i=1}^n(y_i - \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}))\left(x_i^{(0)}, \ldots, x_i^{(p-1)}\right)^\top \\ &= \sum_{i=1}^n(y_i - \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})) \boldsymbol{\mathbf{x}}_i \end{align} \] Gradient ascent then becomes \[ \boldsymbol{\mathbf{\beta}}_{k+1} \leftarrow \boldsymbol{\mathbf{\beta}}_{k} + \gamma \nabla_{\boldsymbol{\mathbf{\beta}}}\ln p(\boldsymbol{\mathbf{y}}\mid\boldsymbol{\mathbf{\beta}}_k) = \boldsymbol{\mathbf{\beta}}_k + \gamma \sum_{i=1}^n(y_i - \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}_k)) \boldsymbol{\mathbf{x}}_i \] This can be vectorized when programming as follows \[ \boldsymbol{\mathbf{\beta}}_{k+1} \leftarrow \boldsymbol{\mathbf{\beta}}_k + \gamma X^\top(\boldsymbol{\mathbf{y}}- \sigma(X\boldsymbol{\mathbf{\beta}}_k)) \] where \(X\in\mathbb{R}^{n\times p}\) is the design matrix. \[ X = \begin{pmatrix} \boldsymbol{\mathbf{x}}_1^\top \\ \vdots \\ \boldsymbol{\mathbf{x}}_n^\top \end{pmatrix} \]
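A direct translation of the vectorized update into code, as a sketch assuming NumPy (the step size and iteration count are arbitrary placeholders; in practice one would monitor convergence):

```python
import numpy as np

def gradient_ascent_mle(X, y, gamma=1e-3, n_iter=5000):
    """Fixed-step gradient ascent on the log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        residual = y - 1 / (1 + np.exp(-X @ beta))   # y - sigma(X beta_k)
        beta = beta + gamma * X.T @ residual          # beta_{k+1} = beta_k + gamma X^T (y - sigma(X beta_k))
    return beta
```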
One can also change the step size at every iteration. One possible choice for \(\gamma_k\) is the Barzilai–Borwein step size \[ \gamma_k = \frac{|(\boldsymbol{\mathbf{\beta}}_k - \boldsymbol{\mathbf{\beta}}_{k-1})^\top [\nabla \ln p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}}_k) - \nabla\ln p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}}_{k-1})]|}{\parallel\nabla\ln p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}}_k) - \nabla \ln p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}}_{k-1})\parallel^2} \]
Newton’s Method (MLE, No Regularization)
Again, during the Laplace approximation section we found that the Hessian is given by \[ \nabla_{\boldsymbol{\mathbf{\beta}}}^2 \ln p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}}) = -\sum_{i=1}^n \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})(1 - \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})) \boldsymbol{\mathbf{x}}_i\boldsymbol{\mathbf{x}}_i^\top \] This expression can be vectorized as follows \[ \nabla_{\boldsymbol{\mathbf{\beta}}}^2 \ln p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}}) = - X^\top D X \] where \[ D = \text{diag}(\sigma(X\boldsymbol{\mathbf{\beta}})(1 - \sigma(X\boldsymbol{\mathbf{\beta}}))) = \begin{pmatrix} \sigma(\boldsymbol{\mathbf{x}}_1^\top\boldsymbol{\mathbf{\beta}})(1 - \sigma(\boldsymbol{\mathbf{x}}_1^\top\boldsymbol{\mathbf{\beta}})) & 0 & \ldots & 0 \\ 0 & \sigma(\boldsymbol{\mathbf{x}}_2^\top\boldsymbol{\mathbf{\beta}})(1 - \sigma(\boldsymbol{\mathbf{x}}_2^\top\boldsymbol{\mathbf{\beta}})) & \ldots & 0 \\ \vdots & \ldots & \ddots & \vdots \\ 0 & 0 & \ldots &\sigma(\boldsymbol{\mathbf{x}}_n^\top\boldsymbol{\mathbf{\beta}})(1 - \sigma(\boldsymbol{\mathbf{x}}_n^\top\boldsymbol{\mathbf{\beta}})) \end{pmatrix} \] Newton’s method then updates the weights as follows (where \(\alpha\) is a learning rate to control convergence) \[ \boldsymbol{\mathbf{\beta}}_{k+1} \leftarrow \boldsymbol{\mathbf{\beta}}_k + \alpha(X^\top D X)^{-1} X^\top(\boldsymbol{\mathbf{y}}- \sigma(X\boldsymbol{\mathbf{\beta}}_k)) \]
Of course in practice we never invert the matrix but rather compute the direction \(\boldsymbol{\mathbf{d}}_k\) by solving the linear system \[ X^\top D X\boldsymbol{\mathbf{d}}_k = \alpha X^\top(\boldsymbol{\mathbf{y}}- \sigma(X\boldsymbol{\mathbf{\beta}}_k)) \] and then we find the next iterate as \[ \boldsymbol{\mathbf{\beta}}_{k+1}\leftarrow \boldsymbol{\mathbf{\beta}}_k + \boldsymbol{\mathbf{d}}_k \]
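A sketch of this damped Newton iteration with a linear solve instead of an explicit inverse (assuming NumPy; `alpha` and the iteration count are placeholders):

```python
import numpy as np

def newton_mle(X, y, alpha=1.0, n_iter=25):
    """Newton's method for the unregularized logistic log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        s = 1 / (1 + np.exp(-X @ beta))           # sigma(X beta_k)
        w = s * (1 - s)                            # diagonal entries of D
        grad = X.T @ (y - s)                       # X^T (y - sigma(X beta_k))
        hess = X.T @ (w[:, None] * X)              # X^T D X
        d = np.linalg.solve(hess, alpha * grad)    # solve X^T D X d_k = alpha X^T (y - sigma)
        beta = beta + d
    return beta
```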
Gradient Ascent (MAP, Ridge Regularization)
We want to maximize \[ \ln p(\boldsymbol{\mathbf{\beta}}\mid \boldsymbol{\mathbf{y}}) = \ln p(\boldsymbol{\mathbf{y}}\mid \boldsymbol{\mathbf{\beta}}) + \ln p(\boldsymbol{\mathbf{\beta}}) + \text{const} = - \sum_{i=1}^n \ln\left(1 + \exp\left((1 - 2 y_i) \boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}\right)\right) - \frac{1}{2\sigma^2_{\boldsymbol{\mathbf{\beta}}}}\boldsymbol{\mathbf{\beta}}^\top \boldsymbol{\mathbf{\beta}} + \text{const} \] The gradient of the log posterior is given by \[ \nabla_{\boldsymbol{\mathbf{\beta}}} \ln p(\boldsymbol{\mathbf{\beta}}\mid \boldsymbol{\mathbf{y}}) = X^\top (\boldsymbol{\mathbf{y}}- \sigma(X\boldsymbol{\mathbf{\beta}})) - \frac{1}{\sigma^2_{\boldsymbol{\mathbf{\beta}}}}\boldsymbol{\mathbf{\beta}} \] Thus gradient ascent for MAP estimation becomes (rescaling the gradient by \(\sigma^2_{\boldsymbol{\mathbf{\beta}}}\), which only changes the effective step size) \[ \boldsymbol{\mathbf{\beta}}_{k+1}\leftarrow \boldsymbol{\mathbf{\beta}}_k + \gamma \left(\sigma^2_{\boldsymbol{\mathbf{\beta}}}X^\top (\boldsymbol{\mathbf{y}}- \sigma(X\boldsymbol{\mathbf{\beta}}_k)) - \boldsymbol{\mathbf{\beta}}_k\right) \]
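In code this is a one-line change to the MLE gradient ascent (a sketch assuming NumPy; `sigma2` stands for \(\sigma^2_{\boldsymbol{\mathbf{\beta}}}\) and the other settings are placeholders):

```python
import numpy as np

def gradient_ascent_map(X, y, sigma2, gamma=1e-3, n_iter=5000):
    """Gradient ascent on the (rescaled) log posterior with an isotropic Gaussian prior."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        residual = y - 1 / (1 + np.exp(-X @ beta))            # y - sigma(X beta_k)
        beta = beta + gamma * (sigma2 * X.T @ residual - beta)
    return beta
```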
Newton’s Method (MAP, Ridge Regularization)
Similarly, we want to maximize \(\ln p(\boldsymbol{\mathbf{\beta}}\mid \boldsymbol{\mathbf{y}})\). The Hessian is given by \[ \nabla_{\boldsymbol{\mathbf{\beta}}}^2 \ln p(\boldsymbol{\mathbf{\beta}}\mid \boldsymbol{\mathbf{y}}) = - X^\top D X - \frac{1}{\sigma^2_{\boldsymbol{\mathbf{\beta}}}}I \] therefore Newton’s method update becomes (multiplying both the Hessian and the gradient by \(\sigma^2_{\boldsymbol{\mathbf{\beta}}}\), which leaves the Newton direction unchanged) \[ \boldsymbol{\mathbf{\beta}}_{k+1} \leftarrow \boldsymbol{\mathbf{\beta}}_k + \alpha \left[\sigma^2_{\boldsymbol{\mathbf{\beta}}} X^\top D X + I\right]^{-1}\left(\sigma^2_{\boldsymbol{\mathbf{\beta}}} X^\top (\boldsymbol{\mathbf{y}}- \sigma(X\boldsymbol{\mathbf{\beta}}_k)) - \boldsymbol{\mathbf{\beta}}_k\right) \]
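And the corresponding update in code, again via a linear solve (a sketch assuming NumPy; `alpha`, `sigma2` and the iteration count are placeholders):

```python
import numpy as np

def newton_map(X, y, sigma2, alpha=1.0, n_iter=25):
    """Newton's method for ridge-regularized (MAP) logistic regression."""
    p = X.shape[1]
    beta = np.zeros(p)
    for _ in range(n_iter):
        s = 1 / (1 + np.exp(-X @ beta))                       # sigma(X beta_k)
        w = s * (1 - s)                                        # diagonal entries of D
        grad = sigma2 * X.T @ (y - s) - beta                   # rescaled gradient
        hess = sigma2 * X.T @ (w[:, None] * X) + np.eye(p)     # sigma2 X^T D X + I
        beta = beta + np.linalg.solve(hess, alpha * grad)
    return beta
```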
Iteratively Reweighted Least-Squares
We can manipulate the expression in Newton’s method (taking \(\alpha = 1\)) by defining the working response \[ \boldsymbol{\mathbf{z}}_k = X\boldsymbol{\mathbf{\beta}}_k + D_k^{-1}(\boldsymbol{\mathbf{y}}- \sigma(X\boldsymbol{\mathbf{\beta}}_k)) \] Then, since \(X^\top D_k \boldsymbol{\mathbf{z}}_k = X^\top D_k X \boldsymbol{\mathbf{\beta}}_k + X^\top(\boldsymbol{\mathbf{y}}- \sigma(X\boldsymbol{\mathbf{\beta}}_k))\), the update takes the form \[ \boldsymbol{\mathbf{\beta}}_{k+1}\leftarrow (X^\top D_k X)^{-1} X^\top D_k \boldsymbol{\mathbf{z}}_k \] In practice we would follow these steps
- Evaluate \(\boldsymbol{\mathbf{\eta}}_k = X\boldsymbol{\mathbf{\beta}}_k\) and \(D_k\).
- Solve the system \(D_k \boldsymbol{\mathbf{r}}_k = \boldsymbol{\mathbf{y}}- \sigma(\boldsymbol{\mathbf{\eta}}_k)\) for \(\boldsymbol{\mathbf{r}}_k\).
- Compute \(\boldsymbol{\mathbf{z}}_k = \boldsymbol{\mathbf{\eta}}_k + \boldsymbol{\mathbf{r}}_k\).
- Solve the system \((X^\top D_k X)\boldsymbol{\mathbf{d}}_k = X^\top D_k \boldsymbol{\mathbf{z}}_k\) for \(\boldsymbol{\mathbf{d}}_k\).
- Set \(\boldsymbol{\mathbf{\beta}}_{k+1} \leftarrow \boldsymbol{\mathbf{d}}_k\)
Alternatively, noticing that \(D_{ii}=\sigma(\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}})(1 - \sigma(\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}})) > 0\), one can take the square root of its elements and rewrite the problem as \[ (D^{1/2}_kX)^\top (D^{1/2}_k X) \boldsymbol{\mathbf{d}}_k = (D^{1/2}_k X)^\top (D^{1/2}_k \boldsymbol{\mathbf{z}}_k) \] which is a simple least-squares problem in the new variables \(\widetilde{X}_k = D^{1/2}_kX\) and \(\widetilde{\boldsymbol{\mathbf{z}}}_k=D_k^{1/2}\boldsymbol{\mathbf{z}}_k\). It can be solved using the QR decomposition \(\widetilde{X}_k = QR\), by solving the following triangular system for \(\boldsymbol{\mathbf{d}}_k\) \[ R \boldsymbol{\mathbf{d}}_k = Q^\top \widetilde{\boldsymbol{\mathbf{z}}}_k \]
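A sketch of the full IRLS loop with the QR variant (assuming NumPy; the iteration count is a placeholder, and in practice the weights are often floored away from zero for numerical safety):

```python
import numpy as np

def irls(X, y, n_iter=25):
    """Iteratively reweighted least squares for (unregularized) logistic regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta                            # eta_k = X beta_k
        s = 1 / (1 + np.exp(-eta))                # sigma(eta_k)
        w = s * (1 - s)                            # diagonal of D_k (strictly positive)
        z = eta + (y - s) / w                      # working response z_k
        sw = np.sqrt(w)
        Q, R = np.linalg.qr(sw[:, None] * X)       # QR decomposition of D_k^{1/2} X
        beta = np.linalg.solve(R, Q.T @ (sw * z))  # solve R beta_{k+1} = Q^T D_k^{1/2} z_k
    return beta
```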
Logistic Regression \(\{-1, 1\}\)
Notice that in the previous chapter we used \(y_i\in \{0, 1\}\) with \[ \mathbb{P}(Y_i=y_i \mid \boldsymbol{\mathbf{x}}_i) = \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})^{y_i}\sigma(-\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}})^{1-y_i} = \begin{cases} \sigma(-\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}) && \text{if } y_i =0\\ \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}) && \text{if } y_i = 1 \end{cases} \] In particular \[ p(y_i = 1) = \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}) \] This gave us the loss function \[ \sum_{i=1}^n\ln\left(1 + \exp((1 - 2y_i)\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}})\right) \] Now the key point is to notice that \[ 1 - 2 y_i = \begin{cases} 1 && \text{if } y_i = 0\\ -1 && \text{if } y_i = 1 \end{cases} \] So that actually the mapping that makes \(\{0, 1\}\)-Logistic Regression and \(\{-1, 1\}\)-Logistic Regression equivalent is \(0 \mapsto 1\) and \(1 \mapsto -1\).
Writing \(z_i = 1 - 2y_i \in \{-1, 1\}\) for the mapped labels, we now have \[ p(z_i = -1) = \sigma(\boldsymbol{\mathbf{x}}_i^\top \boldsymbol{\mathbf{\beta}}) \] and the loss function becomes \[ \sum_{i=1}^n\ln\left(1 + \exp(z_i\boldsymbol{\mathbf{x}}_i^\top\boldsymbol{\mathbf{\beta}})\right) \]
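As a final sanity check, the sketch below (assuming NumPy, with synthetic data) verifies that the \(\{0,1\}\) loss and the negative log-probability of the mapped \(\{-1,1\}\) labels coincide:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = rng.normal(size=p)
y = rng.integers(0, 2, size=n)     # original labels in {0, 1}
z = 1 - 2 * y                      # mapped labels: 0 -> +1, 1 -> -1

eta = X @ beta
# {0, 1} formulation: sum_i ln(1 + exp((1 - 2 y_i) x_i^T beta))
loss_01 = np.sum(np.logaddexp(0.0, (1 - 2 * y) * eta))
# {-1, 1} formulation: -sum_i ln P(Z_i = z_i), where P(Z_i = -1) = sigma(x_i^T beta)
prob_observed = 1 / (1 + np.exp(z * eta))   # equals sigma(-z_i x_i^T beta)
loss_pm1 = -np.sum(np.log(prob_observed))

assert np.isclose(loss_01, loss_pm1)
```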