
Commit

fixed typo in gradient of cost function
spors committed Dec 9, 2024
1 parent 78e8c60 commit 3317084
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions random_signals_LTI_systems/linear_prediction.ipynb
@@ -66,7 +66,7 @@
"The above equation is referred to as the [*cost function*](https://en.wikipedia.org/wiki/Loss_function) $J$ of the optimization problem. We aim to minimize the cost function, hence minimizing the MSE between the signal $x[k]$ and its prediction $\\hat{x}[k]$. The solution of this [convex optimization](https://en.wikipedia.org/wiki/Convex_optimization) problem is referred to as the [minimum mean squared error](https://en.wikipedia.org/wiki/Minimum_mean_square_error) (MMSE) solution. Minimizing the cost function is achieved by calculating its gradient with respect to the filter coefficients [[Haykin](../index.ipynb#Literature)] using results from [matrix calculus](https://en.wikipedia.org/wiki/Matrix_calculus)\n",
"\n",
"\\begin{align}\n",
- "\\nabla_\\mathbf{h} J &= -2 E \\left\\{ x[k-1] (x[k] - \\mathbf{h}^T[k] \\mathbf{x}[k-1]) \\right\\} \\\\\n",
+ "\\nabla_\\mathbf{h} J &= -2 E \\left\\{ \\mathbf{x}[k-1] (x[k] - \\mathbf{h}^T[k] \\mathbf{x}[k-1]) \\right\\} \\\\\n",
"&= - 2 \\mathbf{r}[k] + 2 \\mathbf{R}[k-1] \\mathbf{h}[k]\n",
"\\end{align}\n",
"\n",
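The corrected gradient can be checked numerically. Below is a minimal sketch (not from the notebook; the AR(1) test signal, the variable names, and the predictor length `N` are assumptions): at the MMSE solution $\mathbf{h} = \mathbf{R}^{-1} \mathbf{r}$ the gradient $-2\mathbf{r} + 2\mathbf{R}\mathbf{h}$ vanishes.

```python
import numpy as np

# Hypothetical numerical check of the corrected gradient: for the cost
# J = E{(x[k] - h^T x[k-1])^2} the gradient is  grad = -2 r + 2 R h,
# with r = E{x[k-1] x[k]} and R = E{x[k-1] x[k-1]^T}.

rng = np.random.default_rng(0)
N = 4        # predictor length (assumed)
K = 100_000  # number of samples (assumed)

# synthetic AR(1) process x[k] = 0.9 x[k-1] + w[k]
x = rng.normal(size=K)
for k in range(1, K):
    x[k] += 0.9 * x[k - 1]

# stack the delayed sample vectors x[k-1] = (x[k-1], ..., x[k-N])^T as columns
X = np.stack([x[N - 1 - n : K - 1 - n] for n in range(N)])
xk = x[N:]  # target samples x[k]

R = X @ X.T / xk.size  # sample estimate of the correlation matrix R
r = X @ xk / xk.size   # sample estimate of the cross-correlation vector r

h = np.linalg.solve(R, r)    # MMSE solution of R h = r
grad = -2 * r + 2 * R @ h    # gradient as given in the corrected equation
print(np.allclose(grad, 0))  # True: the gradient vanishes at the optimum
```

For the AR(1) signal above, the estimated first coefficient of `h` lies close to 0.9, as expected for a one-step linear predictor.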
@@ -10436,9 +10436,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.9.13"
+ "version": "3.12.6"
}
},
"nbformat": 4,
- "nbformat_minor": 1
+ "nbformat_minor": 4
}
