\documentclass[11 pt,xcolor={dvipsnames,svgnames,x11names,table}]{beamer}

\usepackage[english]{babel}
\usepackage{polski}
\usepackage[skins,theorems]{tcolorbox}
\tcbset{highlight math style={enhanced,
colframe=red,colback=white,arc=0pt,boxrule=1pt}}

\usetheme[
bullet=circle, % Other option: square
bigpagenumber, % circled page number on lower right
topline=true, % colored bar at the top of the frame
shadow=false, % Shading for beamer blocks
watermark=BG_lower, % png file for the watermark
]{Flip}

%\logo{\kern+1.em\includegraphics[height=1cm]{SHiP-3_LightCharcoal}}

\usepackage[lf]{berenis}
\usepackage[LY1]{fontenc}
\usepackage[utf8]{inputenc}

\usepackage{emerald}
\usefonttheme{professionalfonts}
\usepackage[no-math]{fontspec}
\usepackage{listings}
\defaultfontfeatures{Mapping=tex-text} % This seems to be important for mapping glyphs properly

\setmainfont{Gillius ADF} % Beamer ignores "main font" in favor of sans font
\setsansfont{Gillius ADF} % This is the font that beamer will use by default
% \setmainfont{Gill Sans Light} % Prettier, but harder to read

\setbeamerfont{title}{family=\fontspec{Gillius ADF}}

\input t1augie.fd

%\newcommand{\handwriting}{\fontspec{augie}} % From Emerald City, free font
%\newcommand{\handwriting}{\usefont{T1}{fau}{m}{n}} % From Emerald City, free font
% \newcommand{\handwriting}{} % If you prefer no special handwriting font or don't have augie

%% Gill Sans doesn't look very nice when boldfaced
%% This is a hack to use Helvetica instead
%% Usage: \textbf{\forbold some stuff}
%\newcommand{\forbold}{\fontspec{Arial}}

\usepackage{graphicx}
\usepackage[export]{adjustbox}

\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{bm}
\usepackage{colortbl}
\usepackage{mathrsfs} % For Weinberg-esque letters
\usepackage{cancel} % For "SUSY-breaking" symbol
\usepackage{slashed} % for slashed characters in math mode
\usepackage{bbm} % for \mathbbm{1} (unit matrix)
\usepackage{amsthm} % For theorem environment
\usepackage{multirow} % For multi row cells in table
\usepackage{arydshln} % For dashed lines in arrays and tables
\usepackage{siunitx}
\usepackage{xhfill}
\usepackage{grffile}
\usepackage{textpos}
\usepackage{subfigure}
\usepackage{tikz}
\usepackage{hyperref}
%\usepackage{hepparticles}
\usepackage[italic]{hepparticles}

\usepackage{hepnicenames}

% Drawing a line
\tikzstyle{lw} = [line width=20pt]
\newcommand{\topline}{%
\tikz[remember picture,overlay] {%
\draw[crimsonred] ([yshift=-23.5pt]current page.north west)
-- ([yshift=-23.5pt,xshift=\paperwidth]current page.north west);}}



% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
\usepackage{tikzfeynman} % For Feynman diagrams
\usetikzlibrary{arrows,shapes}
\usetikzlibrary{trees}
\usetikzlibrary{matrix,arrows} % For commutative diagram
% http://www.felixl.de/commu.pdf
\usetikzlibrary{positioning} % For "above of=" commands
\usetikzlibrary{calc,through} % For coordinates
\usetikzlibrary{decorations.pathreplacing} % For curly braces
% http://www.math.ucla.edu/~getreuer/tikz.html
\usepackage{pgffor} % For repeating patterns

\usetikzlibrary{decorations.pathmorphing} % For Feynman Diagrams
\usetikzlibrary{decorations.markings}
\tikzset{
% >=stealth', %% Uncomment for more conventional arrows
vector/.style={decorate, decoration={snake}, draw},
provector/.style={decorate, decoration={snake,amplitude=2.5pt}, draw},
antivector/.style={decorate, decoration={snake,amplitude=-2.5pt}, draw},
fermion/.style={draw=gray, postaction={decorate},
decoration={markings,mark=at position .55 with {\arrow[draw=gray]{>}}}},
fermionbar/.style={draw=gray, postaction={decorate},
decoration={markings,mark=at position .55 with {\arrow[draw=gray]{<}}}},
fermionnoarrow/.style={draw=gray},
gluon/.style={decorate, draw=black,
decoration={coil,amplitude=4pt, segment length=5pt}},
scalar/.style={dashed,draw=black, postaction={decorate},
decoration={markings,mark=at position .55 with {\arrow[draw=black]{>}}}},
scalarbar/.style={dashed,draw=black, postaction={decorate},
decoration={markings,mark=at position .55 with {\arrow[draw=black]{<}}}},
scalarnoarrow/.style={dashed,draw=black},
electron/.style={draw=black, postaction={decorate},
decoration={markings,mark=at position .55 with {\arrow[draw=black]{>}}}},
bigvector/.style={decorate, decoration={snake,amplitude=4pt}, draw},
}

% TIKZ - for block diagrams,
% from http://www.texample.net/tikz/examples/control-system-principles/
% \usetikzlibrary{shapes,arrows}
\tikzstyle{block} = [draw, rectangle,
minimum height=3em, minimum width=6em]




\usetikzlibrary{backgrounds}
\usetikzlibrary{mindmap,trees} % For mind map
\newcommand{\degree}{\ensuremath{^\circ}}
\newcommand{\E}{\mathrm{E}}
\newcommand{\Var}{\mathrm{Var}}
\newcommand{\Cov}{\mathrm{Cov}}
\newcommand\Ts{\rule{0pt}{2.6ex}} % Top strut
\newcommand\Bs{\rule[-1.2ex]{0pt}{0pt}} % Bottom strut

\graphicspath{{images/}} % Put all images in this directory. Avoids clutter.

% SOME COMMANDS THAT I FIND HANDY
% \renewcommand{\tilde}{\widetilde} % dinky tildes look silly, doesn't work with fontspec
%\newcommand{\comment}[1]{\textcolor{comment}{\footnotesize{#1}\normalsize}} % comment mild
%\newcommand{\Comment}[1]{\textcolor{Comment}{\footnotesize{#1}\normalsize}} % comment bold
%\newcommand{\COMMENT}[1]{\textcolor{COMMENT}{\footnotesize{#1}\normalsize}} % comment crazy bold
\newcommand{\Alert}[1]{\textcolor{Alert}{#1}} % louder alert
\newcommand{\ALERT}[1]{\textcolor{ALERT}{#1}} % loudest alert
%% "\alert" is already a beamer pre-defined
\newcommand*{\Scale}[2][4]{\scalebox{#1}{$#2$}}%

\def\Put(#1,#2)#3{\leavevmode\makebox(0,0){\put(#1,#2){#3}}}

\usepackage{gmp}
\usepackage[final]{feynmp-auto}

\usepackage[backend=bibtex,style=numeric-comp,firstinits=true]{biblatex}
\bibliography{bib}
\setbeamertemplate{bibliography item}[text]

\makeatletter\let\frametextheight\beamer@frametextheight\makeatother

% suppress frame numbering for backup slides
% you always need the appendix for this!
\newcommand{\backupbegin}{
\newcounter{framenumberappendix}
\setcounter{framenumberappendix}{\value{framenumber}}
}
\newcommand{\backupend}{
\addtocounter{framenumberappendix}{-\value{framenumber}}
\addtocounter{framenumber}{\value{framenumberappendix}}
}


\definecolor{links}{HTML}{2A1B81}
%\hypersetup{colorlinks,linkcolor=,urlcolor=links}

% For shapo's formulas:
\def\lsi{\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}}
\def\gsi{\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}}
\newcommand{\lsim}{\mathop{\lsi}}
\newcommand{\gsim}{\mathop{\gsi}}
\newcommand{\wt}{\widetilde}
%\newcommand{\ol}{\overline}
\newcommand{\Tr}{\rm{Tr}}
\newcommand{\tr}{\rm{tr}}
\newcommand{\eqn}[1]{&\hspace{-0.7em}#1\hspace{-0.7em}&}
\newcommand{\vev}[1]{\rm{$\langle #1 \rangle$}}
\newcommand{\abs}[1]{\rm{$\left| #1 \right|$}}
\newcommand{\eV}{\rm{eV}}
\newcommand{\keV}{\rm{keV}}
\newcommand{\GeV}{\rm{GeV}}
\newcommand{\im}{\rm{Im}}
\newcommand{\disp}{\displaystyle}
\def\be{\begin{equation}}
\def\ee{\end{equation}}
\def\ba{\begin{eqnarray}}
\def\ea{\end{eqnarray}}
\def\d{\partial}
\def\l{\left(}
\def\r{\right)}
\def\la{\langle}
\def\ra{\rangle}
\def\e{{\rm e}}
\def\Br{{\rm Br}}
\def\fixme{{\color{red} FIXME!}}
\def\mc{{\color{Magenta}{MC}}}
\def\pdf{{\rm p.d.f.}}
\def\cdf{{\rm c.d.f.}}
\usepackage{xspace} % \xspace is used in the \ARROW definitions below
\def\ARROW{{\color{JungleGreen}{$\Rrightarrow$}}\xspace}
\def\ARROWR{{\color{WildStrawberry}{$\Rrightarrow$}}\xspace}

\newcommand*{\QEDA}{\hfill\ensuremath{\blacksquare}}%
\newcommand*{\QEDB}{\hfill\ensuremath{\square}}%

\author{ {\fontspec{Trebuchet MS}Marcin Chrz\k{a}szcz} (Universit\"{a}t Z\"{u}rich)}
\institute{UZH}
\title[Matrix inversion and Partial Differential Equation Solving]{Matrix inversion and Partial Differential Equation Solving}
\date{\fixme}


\begin{document}
\tikzstyle{every picture}+=[remember picture]

{
\setbeamertemplate{sidebar right}{\llap{\includegraphics[width=\paperwidth,height=\paperheight]{bubble2}}}
\begin{frame}[c]%{\phantom{title page}}
\begin{center}
\begin{center}
\begin{columns}
\begin{column}{0.9\textwidth}
\flushright\fontspec{Trebuchet MS}\bfseries \Huge {Matrix inversion and Partial Differential Equation Solving}
\end{column}
\begin{column}{0.2\textwidth}
%\includegraphics[width=\textwidth]{SHiP-2}
\end{column}
\end{columns}
\end{center}
\quad
\vspace{3em}
\begin{columns}
\begin{column}{0.44\textwidth}
\flushright \vspace{-1.8em} {\fontspec{Trebuchet MS} \Large Marcin Chrząszcz\\\vspace{-0.1em}\small \href{mailto:mchrzasz@cern.ch}{mchrzasz@cern.ch}}

\end{column}
\begin{column}{0.53\textwidth}
\includegraphics[height=1.3cm]{uzh-transp}
\end{column}
\end{columns}

\vspace{1em}
% \footnotesize\textcolor{gray}{With N. Serra, B. Storaci\\Thanks to the theory support from M. Shaposhnikov, D. Gorbunov}\normalsize\\
\vspace{0.5em}
\textcolor{normal text.fg!50!Comment}{Monte Carlo methods, \\ 28 April, 2016}
\end{center}
\end{frame}
}

\begin{frame}\frametitle{Announcement}

\begin{Large}
There will be no lecture or class on the 19$^{th}$ of May.
\end{Large}

\end{frame}


\begin{frame}\frametitle{Matrix inversion}
\begin{minipage}{\textwidth}
\begin{footnotesize}

\ARROW Last time we discussed methods for solving systems of linear equations. The same methods can be used for matrix inversion! The columns of the inverse matrix can be found by solving:
\begin{align*}
\textbf{A}\overrightarrow{x}= \hat{e}_i,~~~i=1,2,...,n
\end{align*}
%where $\hat{e}_i$ is the $i^{th}$ versor. \\
\ARROW In order to determine the inverse of a matrix $\textbf{A}$ we need to choose an auxiliary matrix $\textbf{M}$ such that:
\begin{align*}
\textbf{H}=\textbf{I}-\textbf{M}\textbf{A}
\end{align*}
satisfies the normalization condition:
\begin{align*}
\Vert \textbf{H} \Vert = \max_{1 \leq i \leq n} \sum_{j=1}^n \vert h_{ij} \vert < 1
\end{align*}
where $\textbf{I}$ is the identity matrix.\\
\ARROW Next we expand $(\textbf{MA})^{-1}$ in a Neumann series:
\begin{align*}
(\textbf{MA})^{-1}=(\textbf{I}-\textbf{H})^{-1}=\textbf{I}+\textbf{H}
+\textbf{H}^2+....
\end{align*}
\ARROW The inverse matrix is then obtained from:
\begin{align*}
\textbf{A}^{-1}=\textbf{A}^{-1} \textbf{M}^{-1} \textbf{M}=(\textbf{MA})^{-1}\textbf{M}
\end{align*}


\end{footnotesize}

\end{minipage}

\end{frame}
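\begin{frame}[fragile]\frametitle{Matrix inversion, Neumann series: a numerical sketch}
\begin{footnotesize}
\ARROW A minimal numerical sketch (not part of the original lecture code) illustrating the truncated Neumann series; the concrete matrix $\textbf{A}$, the diagonal choice of $\textbf{M}$ and the truncation order are assumptions made only for illustration.
\begin{tiny}
\begin{lstlisting}[language=Python]
# Sketch: Neumann-series inversion, assuming ||I - MA|| < 1.
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
M = np.diag(1.0 / np.diag(A))             # simple (assumed) choice of M
H = np.eye(2) - M @ A
assert np.abs(H).sum(axis=1).max() < 1.0  # the norm condition ||H|| < 1

S, term = np.eye(2), np.eye(2)
for _ in range(100):                      # truncate I + H + H^2 + ...
    term = term @ H
    S += term
A_inv = S @ M                             # A^{-1} = (MA)^{-1} M
print(np.allclose(A_inv, np.linalg.inv(A)))
\end{lstlisting}
\end{tiny}
\end{footnotesize}
\end{frame}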
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{frame}\frametitle{Matrix inversion, basic method}
\begin{minipage}{\textwidth}
\begin{footnotesize}
\ARROW For the $(i,j)$ element of the matrix $(MA)^{-1}$ we have:
\begin{align*}
(MA)^{-1}_{ij} = \delta_{ij} + h_{ij} + \sum_{i_1=1}^n h_{i i_1} h_{i_1 j} + \sum_{i_1=1}^n \sum_{i_2=1}^n h_{i i_1} h_{i_1 i_2}h_{i_2 j} + ...
\end{align*}
\ARROW The algorithm:
We freely choose a probability matrix $P=(p_{ij})$ with the conditions:
\begin{align*}
p_{ij}\geq 0,~~~~ p_{ij}=0 \Leftrightarrow h_{ij}=0,~~~~p_{i0}=1-\sum_{j=1}^np_{ij} >0
\end{align*}
\ARROW We construct a random walk on the state set $\lbrace 0,1,2,3...,n \rbrace$:
\begin{enumerate}
\item At the initial moment $(t=0)$ we start in the state $i_0=i$.
\item If at the moment $t$ the walker is in the state $i_t$, then at the time $t+1$ it will be in the state $i_{t+1}$ with the probability $p_{i_t i_{t+1}}$.
\item We stop the walk when we end up in the state $0$.
\end{enumerate}
\end{footnotesize}

\end{minipage}

\end{frame}


\begin{frame}\frametitle{Matrix inversion, basic method}
\begin{minipage}{\textwidth}
\begin{footnotesize}
\ARROW To the observed trajectory $\gamma_k=(i,i_1,...,i_k,0)$ we assign the value:
\begin{align*}
X(\gamma_k)=\frac{ h_{ii_1} h_{i_1 i_2}... h_{i_{k-1} i_k} ~~ \delta_{i_k j }}{ p_{ii_1} p_{i_1 i_2}... p_{i_{k-1} i_k} ~~p_{i_k 0} }
\end{align*}
\ARROW The mean of all observed $X(\gamma_k)$ is an unbiased estimator of $(MA)^{-1}_{ij}$.\\
\begin{exampleblock}{Proof:}
\begin{itemize}
\item The probability of observing the trajectory $\gamma_k$ is:
\begin{align*}
P(\gamma_k) = p_{i i_1} p_{i_1 i_2}... p_{i_{k-1} i_k} p_{i_k 0}
\end{align*}
\item From this point we follow the proof from the previous lecture (Neumann-Ulam) and show that:
\begin{align*}
E \lbrace X(\gamma_k) \rbrace = (MA)^{-1}_{ij}
\end{align*}

\end{itemize}
\end{exampleblock}
\ARROW A different estimator for the $(MA)^{-1}_{ij}$ element is the Wasow estimator:
\begin{align*}
X^{\ast} (\gamma_k) = \sum_{m=0}^k \frac{ h_{ii_1} h_{i_1 i_2}... h_{i_{m-1} i_m} } { p_{ii_1} p_{i_1 i_2}... p_{i_{m-1} i_m} } \delta_{i_m j}
\end{align*}


\end{footnotesize}

\end{minipage}

\end{frame}
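\begin{frame}[fragile]\frametitle{Matrix inversion, basic method: a Monte Carlo sketch}
\begin{footnotesize}
\ARROW A minimal sketch of the random-walk estimator $X(\gamma_k)$ (not part of the original lecture code); the matrices, the absorption probability and the choice $p_{ij}\propto |h_{ij}|$ are illustrative assumptions.
\begin{tiny}
\begin{lstlisting}[language=Python]
# Sketch: estimate (MA)^{-1}_{ij} with the direct (Neumann-Ulam) walk.
import numpy as np
rng = np.random.default_rng(42)

A = np.array([[4.0, 1.0], [2.0, 3.0]])
M = np.diag(1.0 / np.diag(A))
H = np.eye(2) - M @ A
n = H.shape[0]
p_stop = 0.3                              # absorption probability (assumption)
row = np.maximum(np.abs(H).sum(axis=1, keepdims=True), 1e-12)
P = (1.0 - p_stop) * np.abs(H) / row      # p_ij = 0 exactly where h_ij = 0

def estimate(i, j, n_walks=100000):
    total = 0.0
    for _ in range(n_walks):
        state, weight = i, 1.0
        while True:
            probs = np.append(P[state], 1.0 - P[state].sum())  # last slot = stop
            nxt = rng.choice(n + 1, p=probs)
            if nxt == n:                  # absorbed into state 0
                total += weight * (1.0 if state == j else 0.0) / probs[n]
                break
            weight *= H[state, nxt] / P[state, nxt]
            state = nxt
    return total / n_walks

print(estimate(0, 1), np.linalg.inv(M @ A)[0, 1])
\end{lstlisting}
\end{tiny}
\end{footnotesize}
\end{frame}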



\begin{frame}\frametitle{Matrix inversion, dual method}
\begin{minipage}{\textwidth}
\begin{footnotesize}
\ARROW On the set of states $\lbrace 0, 1, 2,...,n \rbrace$ we define a discrete \pdf:
\begin{align*}
q_1,q_2,...,q_n~{ \rm such~that~}q_i>0,~i=1,2,3...n{\rm ~and~} \sum_{i=1}^n q_i =1.
\end{align*}
\ARROW Then we choose an arbitrary probability matrix $P$ (the usual restrictions apply):
\begin{itemize}
\item We choose the initial point with the probability $q_i$.
\item If at the moment $t$ the walker is in the state $i_t$, then at the time $t+1$ it will be in the state $i_{t+1}$ with the probability $p_{i_t i_{t+1}}$.
\item The walk ends when we reach the state $0$.
\item To the trajectory we assign a matrix:
\end{itemize}
\begin{align*}
Y(\gamma_k)=\frac{ h_{i_1 i} h_{i_2 i_1}... h_{i_k i_{k-1}} }{ p_{i_1 i} p_{i_2 i_1}... p_{i_k i_{k-1}} } \frac{1}{q_{i_0}p_{i_k 0} } e_{i_k i_0} \in \mathbb{R}^n \times\mathbb{R}^n
\end{align*}
\ARROW The mean of $Y(\gamma_k)$ is an unbiased estimator of the $(MA)^{-1}$ matrix.\\
\ARROW The Wasow estimator reads:
\begin{align*}
Y^{\ast}=\sum_{m=0}^k \frac{ h_{i_1 i} h_{i_2 i_1}... h_{i_m i_{m-1}} }{ p_{i_1 i} p_{i_2 i_1}... p_{i_m i_{m-1}} } e_{i_m i_0} \in \mathbb{R}^n \times\mathbb{R}^n
\end{align*}
\end{footnotesize}

\end{minipage}

\end{frame}




\begin{frame}\frametitle{Partial differential equations, intro}
\begin{minipage}{\textwidth}
\begin{footnotesize}
\ARROW Let's say we want to describe a point that walks on the $\mathbb{R}$ axis:
\begin{itemize}
\item At the beginning $(t=0)$ the particle is at $x=0$.
\item If at time $t$ the particle is at $x$, then at time $t+1$ it moves to $x+1$ with the known probability $p$ and to the point $x-1$ with the probability $q=1-p$.
\item The moves are independent.
\end{itemize}
\ARROW So let's try to describe the motion of the particle. \\
\ARROW This is clearly a probabilistic problem. Let $\nu(x,t)$ be the probability that at time $t$ the particle is at position $x$. We get the following equation:
\begin{align*}
\nu(x,t+1)=p \nu(x-1,t)+q \nu(x+1,t)
\end{align*}
with the initial conditions:
\begin{align*}
\nu(0,0)=1,~~~~~\nu(x,0)=0~~{\rm if~}x \neq 0.
\end{align*}
\ARROW The above function describes the whole system (every $(t,x)$ point).
\end{footnotesize}

\end{minipage}

\end{frame}
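\begin{frame}[fragile]\frametitle{Random walk on a line: a numerical sketch}
\begin{footnotesize}
\ARROW A minimal sketch (not part of the original lecture code) that propagates the recursion $\nu(x,t+1)=p\nu(x-1,t)+q\nu(x+1,t)$ exactly; the value of $p$ and the number of steps are illustrative assumptions.
\begin{tiny}
\begin{lstlisting}[language=Python]
# Sketch: propagate nu(x,t+1) = p*nu(x-1,t) + q*nu(x+1,t) on a finite grid.
import numpy as np

p, q, steps = 0.6, 0.4, 50
size = 2 * steps + 1                    # positions -steps ... +steps
nu = np.zeros(size)
nu[steps] = 1.0                         # nu(0,0) = 1

for _ in range(steps):
    new = np.zeros(size)
    new[1:] += p * nu[:-1]              # arrive from x-1 with probability p
    new[:-1] += q * nu[1:]              # arrive from x+1 with probability q
    nu = new

x = np.arange(size) - steps
print("total probability:", nu.sum())        # stays equal to 1
print("mean position    :", (x * nu).sum())  # equals steps*(p - q)
\end{lstlisting}
\end{tiny}
\end{footnotesize}
\end{frame}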



\begin{frame}\frametitle{Partial differential equations, intro}
\begin{minipage}{\textwidth}
\begin{tiny}
\ARROW Now in differential equation language we would say that the particle walks in steps of $\Delta x$ at times $k\Delta t$, $k=1,2,3,...$:
\begin{align*}
\nu(x,t+\Delta t)=p\nu(x-\Delta x,t)+q\nu(x+\Delta x,t).
\end{align*}
\ARROW To solve this equation we expand the $\nu(x,t)$ function in a Taylor series:
\begin{align*}
\nu(x,t) + \frac{\partial \nu(x,t)}{\partial t} \Delta t = p \nu(x,t) - p \frac{\partial\nu(x,t) }{\partial x} \Delta x + \frac{1}{2} p \frac{\partial^2 \nu(x,t)}{\partial x^2} (\Delta x)^2\\ + q \nu(x,t) + q \frac{\partial\nu(x,t) }{\partial x} \Delta x + \frac{1}{2} q \frac{\partial^2 \nu(x,t)}{\partial x^2} (\Delta x)^2
\end{align*}
\ARROW From which we get:
\begin{align*}
\frac{\partial \nu(x,t)}{\partial t} \Delta t = -(p-q) \frac{\partial \nu(x,t) }{\partial x}\Delta x + \frac{1}{2} \frac{\partial^2 \nu(x,t) }{\partial x^2}(\Delta x)^2
\end{align*}
\ARROW Now we divide the equation by $\Delta t$ and take the limit $\Delta t \to 0$, with:
\begin{align*}
(p-q) \frac{\Delta x }{\Delta t} \to 2 c,~~~~~~\frac{ (\Delta x)^2}{\Delta t } \to 2D,
\end{align*}
\ARROW We get the Fokker-Planck equation for diffusion with a drift current:
\begin{align*}
\frac{\partial \nu(x,t)}{\partial t } = -2c \frac{\partial \nu(x,t) }{\partial x} + D \frac{\partial^2 \nu(x,t)}{\partial x^2}
\end{align*}
\ARROW Here $D$ is the diffusion coefficient and $c$ is the speed of the current. For $c=0$ the distribution is symmetric.

\end{tiny}

\end{minipage}

\end{frame}
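\begin{frame}[fragile]\frametitle{From the walk to the diffusion limit: a numerical sketch}
\begin{footnotesize}
\ARROW A minimal sketch (not part of the original lecture code) comparing a simulated walk with the drift-diffusion limit; with $\Delta x=\Delta t=1$ one has $2c=p-q$ and $2D=1$, and the chosen $p$, number of steps and sample size are assumptions. The walk variance is $4pq\,t$, which matches $2Dt$ only approximately for a small drift.
\begin{tiny}
\begin{lstlisting}[language=Python]
# Sketch: biased random walk vs. the Fokker-Planck (drift-diffusion) limit.
import numpy as np
rng = np.random.default_rng(1)

p, steps, n_particles = 0.55, 400, 20000
jumps = rng.choice([1, -1], size=(n_particles, steps), p=[p, 1.0 - p])
x_final = jumps.sum(axis=1)              # positions after `steps` time steps

c, D, t = (2 * p - 1) / 2.0, 0.5, steps  # 2c = p - q, 2D = 1 (dx = dt = 1)
print("mean:", x_final.mean(), " expected 2ct:", 2 * c * t)
print("var :", x_final.var(),  " expected 2Dt:", 2 * D * t)  # approximate
\end{lstlisting}
\end{tiny}
\end{footnotesize}
\end{frame}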





\begin{frame}\frametitle{Laplace equation, Dirichlet boundary conditions}
\begin{minipage}{\textwidth}
\begin{footnotesize}
\ARROW The aforementioned example shows the way to solve partial differential equations using Markov Chain MC. \\
\ARROW We will see how different classes of partial differential equations can be approximated with a Markov Chain MC, whose expectation value is the solution of the equation.\\
\ARROW The Laplace equation:
\begin{align*}
\frac{\partial^2 u }{\partial x_1^2 } +\frac{\partial^2 u }{\partial x_2^2 }+...+\frac{\partial^2 u }{\partial x_k^2 }=0
\end{align*}
A function $u(x_1,x_2,...,x_k)$ that solves the above equation is called a harmonic function. If one knows the values of the harmonic function on the boundary $\Gamma(D)$ of the domain $D$, one can solve the equation.\\
\begin{exampleblock}{The Dirichlet boundary conditions:}
Find the values of $u(x_1,x_2,...,x_k)$ inside the domain $D$, knowing that the values on the boundary are given by a function:
\begin{align*}
u(x_1,x_2,...,x_k)=f(x_1,x_2,...,x_k),~~~(x_1,x_2,...,x_k) \in \Gamma(D)
\end{align*}
\end{exampleblock}
\ARROW Now I am lazy so I put $k=2$, but it's the same for all $k$!

\end{footnotesize}

\end{minipage}

\end{frame}




\begin{frame}\frametitle{Laplace equation, Dirichlet boundary conditions}
\begin{minipage}{\textwidth}
\begin{footnotesize}
\begin{columns}
\column{0.1in}
{~}\\
\column{3in}
\ARROW We formulate the Dirichlet boundary condition in a discrete form:\\
\begin{itemize}
\item On the domain $D$ we put a lattice with spacing $h$.
\item Some points we treat as interior points {\color{green}(denoted with circles)}. They form a set denoted $D^{\ast}$.
\item The other points we consider as boundary points; they form a set $\Gamma(D^{\ast})$.
\end{itemize}

\column{2in}
\begin{center}
\includegraphics[width=0.95\textwidth]{images/dir.png}
\end{center}

\end{columns}
\ARROW We express the second derivatives in the discrete form:
\begin{align*}
\frac{ \frac{u(x+h)-u(x)}{h} -\frac{u(x)-u(x-h) }{h} }{h} = \frac{u(x+h)-2u(x)+u(x-h)}{h^2}
\end{align*}
\ARROW Now we choose the units so that $h=1$.

\end{footnotesize}
\end{minipage}
\end{frame}


\begin{frame}\frametitle{Laplace equation, Dirichlet boundary conditions}
\begin{minipage}{\textwidth}
\begin{footnotesize}
\begin{exampleblock}{The Dirichlet condition in the discrete form:}
Find the function $u^{\ast}$ which obeys the difference equation:
\begin{align*}
u^{\ast}(x,y)=\frac{1}{4}\left[ u^{\ast}(x-1,y)+u^{\ast}(x+1,y)+u^{\ast}(x,y-1)+u^{\ast}(x,y+1) \right]
\end{align*}
in all points $(x,y) \in D^{\ast}$ with the condition:
\begin{align*}
u^{\ast}(x,y)=f^{\ast}(x,y),~~~(x,y) \in \Gamma(D^{\ast})
\end{align*}
where $f^{\ast}(x,y)$ is the discrete equivalent of the $f(x,y)$ function.
\end{exampleblock}
\ARROW We consider a random walk on the lattice $D^{\ast} \cup \Gamma(D^{\ast})$:
\begin{itemize}
\item At $t=0$ we are at some point $(\xi,\eta) \in D^{\ast}$.
\item If at time $t$ the particle is at $(x,y)$, then at $t+1$ it moves with equal probability to any of the four neighbouring sites: $(x-1,y)$, $(x+1,y)$, $(x,y-1)$, $(x,y+1)$.
\item If the particle at some moment reaches the boundary $\Gamma(D^{\ast})$, the walk is terminated.
\item To the particle trajectory we assign the value $\nu(\xi,\eta)=f^{\ast}(x,y)$, where $(x,y)\in \Gamma(D^{\ast})$ is the termination point.
\end{itemize}
\end{footnotesize}
\end{minipage}
\end{frame}




\begin{frame}\frametitle{Laplace equation, Dirichlet boundary conditions}
\begin{minipage}{\textwidth}
\begin{footnotesize}
\ARROW Let $p_{\xi,\eta}(x,y)$ be the probability that a walk starting at $(\xi,\eta)$ ends at $(x,y)$.\\
\ARROW The possibilities:
\begin{enumerate}
\item The point $(\xi,\eta) \in \Gamma(D^{\ast})$. Then:
\begin{align}
p_{\xi,\eta}(x,y)=\begin{cases}
1,~~(x,y)=(\xi,\eta)\\
0,~~(x,y)\neq (\xi,\eta)
\end{cases}\label{eq:trivial}
\end{align}
\item The point $(\xi,\eta) \in D^{\ast}$:
\begin{align}
p_{\xi,\eta}(x,y) = \frac{1}{4}\left[ p_{\xi-1,\eta}(x,y) + p_{\xi+1,\eta}(x,y)+ p_{\xi,\eta-1}(x,y)+ p_{\xi,\eta+1}(x,y) \right]
\label{eq:1}
\end{align}


\end{enumerate}
This is because a walk starting from $(\xi,\eta)$ first has to step, with probability $\frac{1}{4}$ each, to one of the neighbours $(\xi-1,\eta)$, $(\xi+1,\eta)$, $(\xi,\eta-1)$, $(\xi,\eta+1)$.\\
\ARROW The expected value of $\nu(\xi,\eta)$ is given by the equation:
\begin{align}
E(\xi,\eta)=\sum_{(x,y)\in \Gamma(D^{\ast})} p_{\xi,\eta}(x,y) f^{\ast}(x,y)\label{eq:2}
\end{align}
where the sum runs over all boundary points.
\end{footnotesize}
\end{minipage}
\end{frame}





\begin{frame}\frametitle{Laplace equation, Dirichlet boundary conditions}
\begin{minipage}{\textwidth}
\begin{footnotesize}
\ARROW Now, multiplying Eq.~\ref{eq:1} by $f^{\ast}(x,y)$ and summing over all boundary points $(x,y)$, we get:
\begin{align*}
E(\xi,\eta)=\frac{1}{4}\left[ E(\xi-1,\eta) + E(\xi+1,\eta) + E(\xi,\eta-1) + E(\xi,\eta+1) \right]
\end{align*}
\ARROW Putting now Eq.~\ref{eq:trivial} into Eq.~\ref{eq:2} one gets:
\begin{align*}
E(\xi,\eta)=f^{\ast}(\xi,\eta),~~(\xi,\eta) \in \Gamma(D^{\ast})
\end{align*}
\ARROW So the expected value satisfies the same equations as our $u^{\ast}(x,y)$ function. From this we conclude:
\begin{align*}
E(\xi,\eta)=u^{\ast}(\xi,\eta)
\end{align*}
\ARROW The algorithm:
\begin{itemize}
\item We put a particle at $(\xi,\eta)$.
\item We observe its walk up to the moment when it reaches the boundary $\Gamma(D^{\ast})$.
\item We calculate the value of the $f^{\ast}$ function at the point where the particle stops.
\item Repeat the walk $N$ times, taking the average afterwards.
\end{itemize}
\begin{alertblock}{Important:}
One can show that the error does not depend on the dimension!
\end{alertblock}

\end{footnotesize}
\end{minipage}
\end{frame}



\begin{frame}\frametitle{Example}

\begin{minipage}{\textwidth}
\begin{footnotesize}
Let the function $u(x,y)$ be a solution of the Laplace equation in the square $0 \leq x,y \leq 4$ with the boundary conditions:
\begin{align*}
u(x,0)=0,~~~u(4,y)=y,~~~u(x,4)=x,~~~u(0,y)=0
\end{align*}
\ARROWR Find $u(2,2)$!\\
\ARROW The exact solution: $u(x,y)=xy/4$, so $u(2,2)=1$.
\begin{columns}
\column{0.1in}
{~}\\
\column{3in}
\begin{itemize}
\item We transform the continuous problem into a discrete one with $h=1$.
\item Perform random walks starting from $(2,2)$ which end on the boundary, assigning to each walk the appropriate value of the boundary condition as the outcome.
\end{itemize}

\column{2in}
\begin{center}
\includegraphics[width=0.95\textwidth]{images/problem1.png}
\end{center}

\end{columns}
\ARROW E9.1 Implement the above example and find $u(2,2)$ (a minimal sketch follows on the next slide).

\end{footnotesize}
\end{minipage}
\end{frame}
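\begin{frame}[fragile]\frametitle{Example, a sketch of E9.1}
\begin{footnotesize}
\ARROW A minimal sketch of one possible implementation (assumptions: plain Python, $h=1$, $N=10^5$ walks); the exact answer for comparison is $u(2,2)=1$.
\begin{tiny}
\begin{lstlisting}[language=Python]
# Sketch: random-walk estimate of u(2,2) for the Laplace example above.
import random
random.seed(0)

def boundary_value(x, y):
    # u(x,0) = 0, u(4,y) = y, u(x,4) = x, u(0,y) = 0
    if y == 0 or x == 0:
        return 0.0
    if x == 4:
        return float(y)
    return float(x)                      # remaining case: y == 4

def walk(x=2, y=2):
    while 0 < x < 4 and 0 < y < 4:       # stop when the boundary is reached
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
    return boundary_value(x, y)

N = 100000
print(sum(walk() for _ in range(N)) / N)  # should be close to 1.0
\end{lstlisting}
\end{tiny}
\end{footnotesize}
\end{frame}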



\begin{frame}\frametitle{Parabolic equation}

\begin{minipage}{\textwidth}
\begin{footnotesize}
\ARROW We are looking for a function $u(x_1,x_2,...,x_k,t)$, which inside the domain $D \subset \mathbb{R}^k$ obeys the parabolic equation:
\begin{align*}
\frac{\partial^2 u }{\partial x_1^2 } +\frac{\partial^2 u }{\partial x_2^2 }+...+\frac{\partial^2 u }{\partial x_k^2 }=c \frac{\partial u}{\partial t}
\end{align*}
with the boundary conditions:
\begin{align*}
u(x_1,x_2,...,x_k,t)=g(x_1,x_2,...,x_k,t),~~~(x_1,x_2,x_3,...,x_k)\in \Gamma(D)
\end{align*}
and with the initial conditions:
\begin{align*}
u(x_1,x_2,...,x_k,0)=h(x_1,x_2,...,x_k),~~~(x_1,x_2,x_3,...,x_k)\in D
\end{align*}
\ARROW In the general case the boundary conditions might also involve derivatives. \\
\ARROW We will find the solution to the above problem using a random walk, starting from the 1-dim case and then generalizing to n-dim.
\end{footnotesize}
\end{minipage}
\end{frame}


\begin{frame}\frametitle{Parabolic equation, 1-dim}

\begin{minipage}{\textwidth}
\begin{footnotesize}
\ARROW We are looking for a function $u(x,t)$, which satisfies the equation:
\begin{align*}
\frac{\partial^2 u }{\partial x^2 } = c \frac{\partial u}{\partial t}
\end{align*}
with the boundary conditions:
\begin{align*}
u(0,t)=f_1(t),~~u(a,t)=f_2(t)
\end{align*}
and with the initial conditions:
\begin{align*}
u(x,0)=g(x).
\end{align*}
\ARROW The above equation can be seen as describing the temperature along a rod as a function of time: we know the initial temperature at each point, and the temperature at the end points is known at all times.\\
\ARROW The above problem can be discretized:
\begin{align*}
x=kh,~~h=\frac{a}{n},~k=1,2,...,n~~~~~~~~t=jl,~j=0,1,2,3...,~l={\rm const}
\end{align*}
\ARROW The difference equation:
\begin{align*}
\frac{u(x+h,t-l) -2u(x,t-l)+u(x-h,t-l)}{h^2}=c \frac{u(x,t)-u(x,t-l)}{l}
\end{align*}


\end{footnotesize}
\end{minipage}
\end{frame}




\begin{frame}\frametitle{Parabolic equation, 1-dim}

\begin{minipage}{\textwidth}
\begin{footnotesize}
\ARROW The steps we choose such that $c h^2 = 2l$.\\
\ARROW Then we obtain the equation:\\
\begin{align*}
u(x,t)=\frac{1}{2}u(x+h,t-l)+\frac{1}{2}u(x-h,t-l)
\end{align*}
\ARROW The value of the function $u$ at the point $x$ and time $t$ can be evaluated as the arithmetic mean of the values at the points $x+h$ and $x-h$ at the previous time step.\\
\ARROW The algorithm for estimating the function at time $\tau$ and point $\xi$:
\begin{itemize}
\item We put the particle at the point $\xi$ with a ``weight'' equal to $\tau$.
\item If in a given step the particle is at $x$ with weight $t$, then with $50:50$ probability it moves to $x-h$ or $x+h$, and its weight becomes $t-l$.
\item The particle ends the walk in two situations:
\begin{itemize}
\item It reaches $x=0$ or $x=a$. In this case we assign to the trajectory the value $f_1(t)$ or $f_2(t)$, where $t$ is the current ``weight''.
\item The ``weight'' of the particle reaches zero. In this case we assign to the trajectory the value $g(x)$, where $x$ is the current position of the particle.
\end{itemize}
\end{itemize}

\end{footnotesize}
\end{minipage}
\end{frame}



\begin{frame}\frametitle{Parabolic equation, 1-dim}

\begin{minipage}{\textwidth}
\begin{footnotesize}
\ARROW We repeat the above procedure $N$ times. The expected value of the function $u$ at the point $(\xi,\tau)$ is estimated by the mean of the observed values (a minimal sketch follows on the next slide).

\begin{exampleblock}{Digression:}
The 1-dim case can be treated as a 2-dim one in $(x,t)$, where the region is unbounded in the $t$ dimension. The walk is terminated after at most $\tau/l$ steps.

\end{exampleblock}
\begin{center}
\includegraphics[width=0.6\textwidth]{images/par.png}
\end{center}

\end{footnotesize}
\end{minipage}
\end{frame}
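\begin{frame}[fragile]\frametitle{Parabolic equation, 1-dim: a numerical sketch}
\begin{footnotesize}
\ARROW A minimal sketch of the walk described above (not part of the original lecture code); the concrete choices $a=1$, $c=1$, $f_1=f_2=0$, $g(x)=\sin(\pi x)$ and the evaluation point $(\xi,\tau)=(0.5,\,0.1)$ are assumptions, for which the exact solution is $u(x,t)=e^{-\pi^2 t}\sin(\pi x)$.
\begin{tiny}
\begin{lstlisting}[language=Python]
# Sketch: 1-dim heat-equation walk for u_xx = c u_t, with c*h^2 = 2*l.
import math
import random
random.seed(1)

a, c, n_cells = 1.0, 1.0, 20
h = a / n_cells
l = c * h * h / 2.0                     # step sizes chosen so that c h^2 = 2 l

def g(x):                               # initial condition (assumption)
    return math.sin(math.pi * x)

def walk(xi=0.5, tau=0.1):
    k = int(round(xi / h))              # lattice index of the starting point
    w = tau                             # the "weight"
    while True:
        if k == 0 or k == n_cells:      # boundary reached: f1(t) = f2(t) = 0
            return 0.0
        if w <= 1e-12:                  # weight exhausted: use g(x)
            return g(k * h)
        k += 1 if random.random() < 0.5 else -1
        w -= l

N = 100000
est = sum(walk() for _ in range(N)) / N
print(est, math.exp(-math.pi ** 2 * 0.1))   # exact u(0.5, 0.1) ~ 0.373
\end{lstlisting}
\end{tiny}
\end{footnotesize}
\end{frame}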


\begin{frame}\frametitle{Parabolic equation, n-dim generalization}

\begin{minipage}{\textwidth}
\begin{footnotesize}
\ARROW We again choose the $h$ and $l$ values according to:
\begin{align*}
\frac{ch^2}{l}=2k
\end{align*}
where $k$ is the number of space dimensions.\\
\ARROW We get:
\begin{align*}
u(x_1,x_2,...,x_k,t)=\frac{1}{2k} \lbrace u(x_1+h,x_2,...,x_k,t-l) + u(x_1-h,x_2,...,x_k,t-l) \\ +...+u(x_1,x_2,...,x_k+h,t-l)+u(x_1,x_2,...,x_k-h,t-l) \rbrace
\end{align*}
\ARROW The $k$-dimensional problem we can solve in the same way as the 1-dim one.\\
\ARROW At each point we have $2k$ possible moves (left or right in each of the dimensions), each with probability $\frac{1}{2k}$.


\end{footnotesize}
\end{minipage}
\end{frame}




\backupbegin

\begin{frame}\frametitle{Backup}


\end{frame}

\backupend

\end{document}