diff --git a/Lectures_my/NumMet/Lecture6/mchrzasz.synctex.gz b/Lectures_my/NumMet/Lecture6/mchrzasz.synctex.gz
index d501b18..4b02f3b 100644
--- a/Lectures_my/NumMet/Lecture6/mchrzasz.synctex.gz
+++ b/Lectures_my/NumMet/Lecture6/mchrzasz.synctex.gz
Binary files differ
diff --git a/Lectures_my/NumMet/Lecture6/mchrzasz.tex b/Lectures_my/NumMet/Lecture6/mchrzasz.tex
index 90d8dcd..4b958d9 100644
--- a/Lectures_my/NumMet/Lecture6/mchrzasz.tex
+++ b/Lectures_my/NumMet/Lecture6/mchrzasz.tex
@@ -206,7 +206,7 @@
 \author{ {\fontspec{Trebuchet MS}Marcin Chrz\k{a}szcz, Danny van Dyk} (UZH)}
 \institute{UZH}
-\title[Linear equation systems: exact methods]{Linear equation systems:exact methods}
+\title[Linear equation systems: iteration methods]{Linear equation systems: iteration methods}
 \date{21 September 2016}
@@ -220,7 +220,7 @@
 \begin{center}
 \begin{columns}
 \begin{column}{0.9\textwidth}
-    \flushright\fontspec{Trebuchet MS}\bfseries \Huge {Linear equation systems: exact methods}
+    \flushright\fontspec{Trebuchet MS}\bfseries \Huge {Linear equation systems: iteration methods}
 \end{column}
 \begin{column}{0.2\textwidth}
 %\includegraphics[width=\textwidth]{SHiP-2}
@@ -242,7 +242,7 @@
 \vspace{1em}
 % \footnotesize\textcolor{gray}{With N. Serra, B. Storaci\\Thanks to the theory support from M. Shaposhnikov, D. Gorbunov}\normalsize\\
 \vspace{0.5em}
-    \textcolor{normal text.fg!50!Comment}{Numerical Methods, \\ 10 October, 2016}
+    \textcolor{normal text.fg!50!Comment}{Numerical Methods, \\ 17 October, 2016}
 \end{center}
 \end{frame}
 }
@@ -259,23 +259,17 @@
 \ARROW The problem: Find the $x$ vector.
-
 \end{frame}
-\begin{frame}\frametitle{Error digression}
+
+
+\begin{frame}\frametitle{LU method with main element selection}
 \begin{small}
-\ARROW There is enormous amount of ways to solve the linear equation system.\\
-\ARROW The choice of one over the other of them should be gathered by the {\it condition} of the matrix $A$ denoted at $cond(A)$.
-\ARROW If the $cond(A)$ is small we say that the problem is well conditioned, otherwise we say it's ill conditioned.\\
-\ARROW The {\it condition} relation is defined as:
-\begin{align*}
-cond(A) = \Vert A \Vert \cdot \Vert A^{-1} \Vert
-\end{align*}
-\ARROW Now there are many definitions of different norms... The most popular one (so-called ''column norm''):
-\begin{align*}
-\Vert A \vert_1 = \max_{1 \leq j \leq n} \sum_{i=1}^n \vert a_{i,j} \vert,
-\end{align*}
-where $n$
-is the dimension of $A$, $i,j$ are columns and rows numbers.
+\ARROW The algorithm for the LU matrix decomposition from the previous lecture does not include selection of the main (pivot) element.\\
+\ARROW This might be problematic. For example:
+
+\fixme
+
@@ -284,6 +278,297 @@
+
+
+\begin{frame}\frametitle{Cholesky decomposition}
+\begin{small}
+
+\ARROW If $A \in \textbf{R}^{N \times N}$ is a symmetric ($A^T=A$) and positive definite matrix, then there exists a special case of the LU factorization such that:
+\begin{align*}
+A = C C^T
+\end{align*}
+where $C$ is a lower triangular matrix with diagonal elements greater than zero. \\
+\ARROW Finding the Cholesky decomposition is about two times faster than finding the LU decomposition. \\
+\ARROW The Cholesky decomposition is computed with essentially the same algorithm as the LU decomposition.
+
+
+
+\end{small}
+\end{frame}
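+
+
+\begin{frame}[fragile]\frametitle{Cholesky decomposition: illustrative code sketch}
+\begin{footnotesize}
+\ARROW A minimal sketch (Python/numpy) of the column-wise Cholesky algorithm, added only for illustration; it assumes $A$ is symmetric positive definite, and the function name \texttt{cholesky} and the test matrix are chosen just for this example.
+\begin{verbatim}
+import numpy as np
+
+def cholesky(A):
+    # returns a lower triangular C with A = C C^T
+    n = A.shape[0]
+    C = np.zeros_like(A, dtype=float)
+    for j in range(n):
+        # diagonal entry: subtract the contribution of columns 0..j-1
+        C[j, j] = np.sqrt(A[j, j] - np.dot(C[j, :j], C[j, :j]))
+        for i in range(j + 1, n):
+            # entries below the diagonal in column j
+            C[i, j] = (A[i, j] - np.dot(C[i, :j], C[j, :j])) / C[j, j]
+    return C
+
+A = np.array([[4., 2.], [2., 3.]])
+C = cholesky(A)
+print(np.allclose(C @ C.T, A))   # expected: True
+\end{verbatim}
+\end{footnotesize}
+\end{frame}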
+
+
+\begin{frame}\frametitle{LDL factorization}
+\begin{small}
+
+\ARROW If a matrix can be Cholesky decomposed, it can also be decomposed as:
+\begin{align*}
+A=LDL^T
+\end{align*}
+where $L$ is a lower triangular matrix such that $\forall_i : l_{ii}=1$ and $D$ is a diagonal matrix with positive elements. \\
+\ARROW The advantage of the LDL decomposition compared to the Cholesky decomposition is that we do not need to compute square roots in the calculations.
+
+
+
+\end{small}
+\end{frame}
+
+
+
+\begin{frame}\frametitle{Iterative methods}
+\begin{small}
+
+\ARROW The exact methods require a fixed and, for large matrices, substantial number of operations (of order $n^3$) to obtain the solution.\\
+\ARROWR Because of this they are not really suited to solving big linear systems.\\
+\ARROW The iteration methods are simple; their main idea is to transform:
+\begin{align*}
+Ax=b
+\end{align*}
+to:
+\begin{align*}
+x= Mx + c
+\end{align*}
+where $A$, $M$ are matrices of size $n \times n$, \\
+and $b$, $c$ are vectors of size $n$.\\
+\ARROW Once we have the second form (by the way, remember the MC lectures?) we can use iterative methods to solve it.
+
+\end{small}
+\end{frame}
+
+
+\begin{frame}\frametitle{Expansion}
+\begin{small}
+
+\ARROW If $\overline{x}$ is the solution of the $A x = b$ system, then also:
+\begin{align*}
+\overline{x} = M \overline{x} + c
+\end{align*}
+\ARROW We construct a sequence that approximates $\overline{x}$ in the following way:
+\begin{align*}
+x^{(k+1)} = M x^{(k)} + c,~~~k=0,1,...
+\end{align*}
+\ARROW The necessary and sufficient condition for the convergence of this sequence (for any starting point $x^{(0)}$) is:
+\begin{align*}
+\rho(M)<1
+\end{align*}
+where $\rho(M)$ is the spectral radius of $M$.
+
+
+
+\end{small}
+\end{frame}
+
+
+\begin{frame}\frametitle{Jacobi method}
+\begin{small}
+
+\ARROW The simplest method is the Jacobi method. \\
+\ARROW We start from a decomposition of the $A$ matrix:
+\begin{align*}
+A = L + U + D
+\end{align*}
+where
+\begin{itemize}
+\item $L=(a_{ij})_{i>j}$ - lower triangular matrix.
+\item $D=(a_{ij})_{i=j}$ - diagonal matrix.
+\item $U=(a_{ij})_{i<j}$ - upper triangular matrix.
+\end{itemize}
+\ARROW The Jacobi iteration then reads:
+\begin{align*}
+x_{i}^{(k+1)} = -\frac{1}{a_{ii}} \left( \sum_{j \neq i}a_{ij}x_{j}^{(k)} \right) + \frac{b_i}{a_{ii}}
+\end{align*}
+\ARROW The matrix:
+\begin{align*}
+M_{J} = -D^{-1} (L+U)
+\end{align*}
+is the so-called Jacobi matrix.
+
+\end{small}
+\end{frame}
+
+
+\begin{frame}\frametitle{Gauss-Seidel method}
+\begin{small}
+
+\ARROW The Gauss-Seidel method improves on the Jacobi method: within one sweep it reuses the components of $x^{(k+1)}$ that have already been computed:
+\begin{align*}
+x_{i}^{(k+1)} = -\frac{1}{a_{ii}} \left( \sum_{j<i}a_{ij}x_{j}^{(k+1)} + \sum_{j>i}a_{ij}x_{j}^{(k)} \right) + \frac{b_i}{a_{ii}}
+\end{align*}
+\ARROW The matrix:
+\begin{align*}
+M_{GS} = -(L+D)^{-1} U
+\end{align*}
+is the so-called Gauss-Seidel matrix.
+
+\end{small}
+\end{frame}
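+
+
+\begin{frame}[fragile]\frametitle{Jacobi and Gauss-Seidel: illustrative code sketch}
+\begin{footnotesize}
+\ARROW A minimal sketch (Python/numpy) of one Jacobi sweep and one Gauss-Seidel sweep, added only for illustration; it assumes $a_{ii} \neq 0$ and a convergent (e.g. diagonally dominant) matrix, and the function names and the test matrix are chosen just for this example.
+\begin{verbatim}
+import numpy as np
+
+def jacobi_step(A, b, x):
+    # x_i <- (b_i - sum_{j != i} a_ij x_j) / a_ii
+    D = np.diag(A)
+    return (b - (A @ x - D * x)) / D
+
+def gauss_seidel_step(A, b, x):
+    # reuses the components already updated in this sweep
+    x = x.copy()
+    for i in range(len(b)):
+        s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
+        x[i] = (b[i] - s) / A[i, i]
+    return x
+
+A = np.array([[4., 1.], [1., 3.]])   # diagonally dominant test matrix
+b = np.array([1., 2.])
+x = np.zeros(2)
+for _ in range(30):
+    x = gauss_seidel_step(A, b, x)
+print(x, np.linalg.solve(A, b))      # both ~ [0.0909, 0.6364]
+\end{verbatim}
+\end{footnotesize}
+\end{frame}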
+
+
+\begin{frame}\frametitle{Convergence of the Gauss-Seidel and Jacobi methods}
+\begin{small}
+
+\begin{exampleblock}{Reminder:}
+The necessary and sufficient condition for the iteration to be convergent is that:
+\begin{align*}
+\rho(M)<1
+\end{align*}
+where $M$ is the matrix of a given method.
+\end{exampleblock}
+
+\ARROW Now, calculating $\rho(M)$ might already be computationally expensive...\\
+\ARROW A very useful way of estimating $\rho(M_J)$ during the iteration is:
+\begin{align*}
+r_J = \frac{\Vert x_{k+1} - x_k \Vert}{\Vert x_k - x_{k-1} \Vert} \approx \rho(M_J)
+\end{align*}
+\ARROW Another useful relation (valid for instance for tridiagonal matrices):
+\begin{align*}
+\rho(M_{GS}) = \rho(M_J)^2
+\end{align*}
+
+
+
+\end{small}
+\end{frame}
+
+
+\begin{frame}\frametitle{Convergence of the Gauss-Seidel and Jacobi methods}
+\begin{small}
+
+\begin{alertblock}{Theorem:}
+If the matrix $(a_{ij})$ fulfils one of the conditions:\\
+\ARROW $\vert a_{ii} \vert > \sum_{j \neq i} \vert a_{ij} \vert,~~ i=1,...,n$, the so-called strong row criterion,\\
+\ARROW $\vert a_{jj} \vert > \sum_{i \neq j} \vert a_{ij} \vert,~~ j=1,...,n$, the so-called strong column criterion,\\
+then the Jacobi and Gauss-Seidel methods converge for any starting point $x^{(0)}$.
+\end{alertblock}
+
+\begin{exampleblock}{Practical note:}
+\ARROWR One needs to note that calculating $\rho(M)$ might be as time consuming as computing the solution itself...\\
+\ARROWR If that is the case, one usually skips this step and computes the solution, checking the convergence with extra care at each iteration.
+
+\end{exampleblock}
+
+
+\end{small}
+\end{frame}
+
+
+
+
+\begin{frame}\frametitle{SOR method}
+\begin{small}
+
+\ARROW The Successive Over-Relaxation (SOR) method is an extension of the Gauss-Seidel method. \\
+\ARROW The modification is that we reuse the previous iterate to get a more precise approximation of the solution. \\
+\ARROW The ''hack'' happens when we calculate $x^{(k+1)}_{GS}$ using the Gauss-Seidel method. Instead of taking $x^{(k+1)}_{GS}$ as the next step, we ''relax'' it using a linear combination:
+\begin{align*}
+x^{(k+1)} =\omega x^{(k+1)}_{GS} + (1-\omega) x^{(k)}
+\end{align*}
+\ARROW From the above one can see that for $\omega=1$ we recover the standard Gauss-Seidel method. \\
+\ARROW The iteration method equation has the form:
+\begin{align*}
+x^{(k+1)} = (D+ \omega L)^{-1} ((1-\omega)D - \omega U)x^{(k)} + (D + \omega L)^{-1} \omega b
+\end{align*}
+\ARROW A short illustrative code sketch of one SOR sweep is shown on the backup slide at the end.
+
+
+
+\end{small}
+\end{frame}
+
+
+
+
+\begin{frame}\frametitle{SOR method, convergence}
+\begin{small}
+
+\ARROW The main difficulty in the SOR method is the fact that we need to choose the $\omega$ parameter. \\
+\ARROW One can prove that if the method converges, then $\omega \in \left(0, 2 \right)$. \\
+\ARROW This is a necessary condition, but it is not sufficient! \\
+\ARROW If $A$ is symmetric and positive definite, then the above condition is also sufficient!
+
+
+
+
+\end{small}
+\end{frame}
+
+
+
+
+
+\begin{frame}\frametitle{Conclusions}
+\begin{small}
+
+\ARROW We learned more exact methods of solving linear systems (the Cholesky and LDL decompositions).\\
+\ARROW For big matrices we need to use iterative methods, which are not exact.\\
+\ARROW My personal experience: I have never had matrices big enough to really need the iterative methods, but this is field dependent!
+
+
+\end{small}
+\end{frame}
+
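+
+\begin{frame}[fragile]\frametitle{Backup: SOR sweep, illustrative code sketch}
+\begin{footnotesize}
+\ARROW A minimal sketch (Python/numpy) of one SOR sweep as described on the SOR slide, added only for illustration; the function name \texttt{sor\_step}, the test matrix and the value $\omega=1.1$ are chosen just for this example.
+\begin{verbatim}
+import numpy as np
+
+def sor_step(A, b, x, omega):
+    # one SOR sweep: relax the Gauss-Seidel update with omega in (0, 2)
+    x = x.copy()
+    for i in range(len(b)):
+        s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
+        x_gs = (b[i] - s) / A[i, i]            # Gauss-Seidel candidate
+        x[i] = omega * x_gs + (1.0 - omega) * x[i]
+    return x
+
+A = np.array([[4., 1.], [1., 3.]])             # symmetric positive definite
+b = np.array([1., 2.])
+x = np.zeros(2)
+for _ in range(30):
+    x = sor_step(A, b, x, omega=1.1)
+print(x)                                       # ~ [0.0909, 0.6364]
+\end{verbatim}
+\end{footnotesize}
+\end{frame}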