diff --git a/Lectures_my/NumMet/2016/Lecture11/mchrzasz.tex b/Lectures_my/NumMet/2016/Lecture11/mchrzasz.tex
index 90d8dcd..ceea9f9 100644
--- a/Lectures_my/NumMet/2016/Lecture11/mchrzasz.tex
+++ b/Lectures_my/NumMet/2016/Lecture11/mchrzasz.tex
@@ -220,7 +220,7 @@
 \begin{center}
 \begin{columns}
 \begin{column}{0.9\textwidth}
-	\flushright\fontspec{Trebuchet MS}\bfseries \Huge {Linear equation systems: exact methods}
+	\flushright\fontspec{Trebuchet MS}\bfseries \Huge {Chaos in ODEs}
 \end{column}
 \begin{column}{0.2\textwidth}
 %\includegraphics[width=\textwidth]{SHiP-2}
@@ -242,48 +242,41 @@
 \vspace{1em}
 %	\footnotesize\textcolor{gray}{With N. Serra, B. Storaci\\Thanks to the theory support from M. Shaposhnikov, D. Gorbunov}\normalsize\\
 \vspace{0.5em}
-\textcolor{normal text.fg!50!Comment}{Numerical Methods, \\ 10 October, 2016}
+\textcolor{normal text.fg!50!Comment}{Numerical Methods, \\ 16 November, 2016}
 \end{center}
 \end{frame}
 }
 
-\begin{frame}\frametitle{Linear eq. system}
+\begin{frame}\frametitle{Classical mechanics formulation}
 
-\ARROW This and the next lecture will focus on a well known problem. Solve the following equation system:
+\ARROW This you should know by heart:\\{~}\\
+\begin{columns}
+\column{0.33\textwidth}
+\ARROW Newton:
 \begin{align*}
-A \cdot x =b,
-\end{align*}
-\ARROWR $A = a_{ij} \in \mathbb{R}^{n\times n}$ and $\det(A) \neq 0$\\
-\ARROWR $b=b_i \in \mathbb{R}^n$.\\
-\ARROW The problem: Find the $x$ vector.
+\overrightarrow{F} = \frac{d \overrightarrow{p}}{dt}
+\end{align*}
+\column{0.33\textwidth}
+\ARROW Lagrange:
+\begin{align*}
+\mathcal{L}=T-V\\
+\frac{\partial \mathcal{L} }{ \partial x} - \frac{d}{dt} \frac{\partial \mathcal{L}}{\partial \dot{x}}=0
+\end{align*}
+\column{0.33\textwidth}
+\ARROW Hamilton:
+\begin{align*}
+\mathcal{H}=T+V\\
+\frac{d p}{d t}=-\frac{\partial \mathcal{H}}{\partial x}\\
+\frac{d x}{d t}= \frac{\partial \mathcal{H}}{\partial p}
+\end{align*}
+\end{columns}
+\ARROW Now, what are the advantages of each one?
 \end{frame}
 
-\begin{frame}\frametitle{Error digression}
-\begin{small}
-\ARROW There is enormous amount of ways to solve the linear equation system.\\
-\ARROW The choice of one over the other of them should be gathered by the {\it condition} of the matrix $A$ denoted at $cond(A)$.
-\ARROW If the $cond(A)$ is small we say that the problem is well conditioned, otherwise we say it's ill conditioned.\\
-\ARROW The {\it condition} relation is defined as:
-\begin{align*}
-cond(A) = \Vert A \Vert \cdot \Vert A^{-1} \Vert
-\end{align*}
-\ARROW Now there are many definitions of different norms... The most popular one (so-called ''column norm''):
-\begin{align*}
-\Vert A \Vert_1 = \max_{1 \leq j \leq n} \sum_{i=1}^n \vert a_{i,j} \vert,
-\end{align*}
-where $n$ is the dimension of $A$, $i,j$ are columns and rows numbers.
-
-
-
-\end{small}
-\end{frame}
-
-
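The new slide closes by asking about the advantages of each formulation. A worked example makes the comparison concrete: for the one-dimensional harmonic oscillator with potential $V(x)=\tfrac{1}{2}kx^2$ (a minimal sketch, not part of the diff above), all three formulations on the slide reduce to the same equation of motion:

```latex
\begin{align*}
&\text{Newton:}   & m\ddot{x} &= F = -\frac{dV}{dx} = -kx\\
&\text{Lagrange:} & \mathcal{L} &= \tfrac{1}{2}m\dot{x}^2 - \tfrac{1}{2}kx^2,
  & \frac{\partial \mathcal{L}}{\partial x}
    - \frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot{x}}
    &= -kx - m\ddot{x} = 0\\
&\text{Hamilton:} & \mathcal{H} &= \frac{p^2}{2m} + \tfrac{1}{2}kx^2,
  & \dot{p} &= -kx, \quad \dot{x} = \frac{p}{m}
\end{align*}
```

Each row follows the corresponding equation on the slide term by term; eliminating $p$ from Hamilton's pair, or expanding the Euler--Lagrange derivative, recovers Newton's $m\ddot{x} = -kx$ in every case.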