diff --git a/Lectures_my/EMPP/Lecture1/images/dupa.png b/Lectures_my/EMPP/Lecture1/images/dupa.png
new file mode 100644
index 0000000..70db5f7
--- /dev/null
+++ b/Lectures_my/EMPP/Lecture1/images/dupa.png
Binary files differ
diff --git a/Lectures_my/EMPP/Lecture1/images/result_weight.png b/Lectures_my/EMPP/Lecture1/images/result_weight.png
new file mode 100644
index 0000000..afda7e1
--- /dev/null
+++ b/Lectures_my/EMPP/Lecture1/images/result_weight.png
Binary files differ
diff --git a/Lectures_my/EMPP/Lecture1/mchrzasz.tex b/Lectures_my/EMPP/Lecture1/mchrzasz.tex
index f0dacdd..d3fc7eb 100644
--- a/Lectures_my/EMPP/Lecture1/mchrzasz.tex
+++ b/Lectures_my/EMPP/Lecture1/mchrzasz.tex
@@ -393,7 +393,7 @@
 $\longmapsto$ Let's use the {\color{BrickRed}{statistical}} world and estimate the uncertainty of an integral in this case :)\\
 $\rightarrowtail$ A variance of a MC integral:
 \begin{align*}
-V(\hat{I}) = \dfrac{1}{n} = \dfrac{1}{n} \Big\lbrace E(f^2) - E^2(f) \Big\rbrace = \dfrac{1}{n} \Big\lbrace \dfrac{1}{b-a} \int_a^b f^2(x)dx - I^2 \Big\rbrace
+V(\hat{I}) = \dfrac{1}{n} \Big\lbrace E(f^2) - E^2(f) \Big\rbrace = \dfrac{1}{n} \Big\lbrace \dfrac{1}{b-a} \int_a^b f^2(x)dx - I^2 \Big\rbrace
 \end{align*}
 \begin{alertblock}{}
 $\looparrowright$ To calculate $V(\hat{I})$ one needs to know the value of $I$!
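In practice the unknown moments $E(f)$ and $E(f^2)$ in the variance formula are replaced by their sample averages, so the uncertainty comes for free from the same points used for the estimate itself. A minimal Python sketch of this (the integrand $\cos x$ on $[0,\pi/2]$ and the helper name `crude_mc` are illustrative choices, not part of the lecture):

```python
import math
import random

def crude_mc(f, a, b, n, seed=42):
    """Crude MC estimate of I = integral_a^b f(x) dx and its uncertainty.

    The moments E(f) and E(f^2) in V(I_hat) are replaced by sample
    averages, so the (unknown) exact value of I is never needed.
    """
    rng = random.Random(seed)
    values = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean_f = sum(values) / n                   # estimates E(f)
    mean_f2 = sum(v * v for v in values) / n   # estimates E(f^2)
    i_hat = (b - a) * mean_f
    # sample-based version of V(I_hat) = (b-a)^2/n * [E(f^2) - E^2(f)]
    var_i = (b - a) ** 2 * (mean_f2 - mean_f ** 2) / n
    return i_hat, math.sqrt(var_i)

i_hat, sigma = crude_mc(math.cos, 0.0, math.pi / 2, 100_000)
# exact value is 1; i_hat should agree with it within a few sigma
```

Note the $(b-a)^2$ prefactor: it appears because here $\hat{I}$ estimates the integral itself, whereas the slide's formula is written for the mean $E(f)$.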
@@ -521,18 +521,264 @@
 \end{footnotesize}
 \end{frame}
+\begin{frame}\frametitle{Law of Large Numbers}
+\begin{footnotesize}
+$\Rrightarrow$ For independent, identically distributed random variables $x_1,\dots,x_N$ with $E(x_i)=\mu$:
+\begin{align*}
+\dfrac{1}{N} \sum_{i=1}^N x_i \xrightarrow{N\to \infty} \mu
+\end{align*}
+\end{footnotesize}
+\end{frame}
+
+
+\begin{frame}\frametitle{Crude Monte Carlo method of integration}
+\begin{footnotesize}
+$\Rrightarrow$ {\color{MidnightBlue}{The crude Monte Carlo method of integration is based on the Law of Large Numbers (LLN): }}\\
+\begin{align*}
+\dfrac{1}{N} \sum_{i=1}^N f(x_i) \xrightarrow{N\to \infty} \dfrac{1}{b-a}\int_a^b f(x)dx =E(f)
+\end{align*}
+$\Rrightarrow$ The standard deviation can be calculated:
+\begin{align*}
+\sigma = \dfrac{1}{\sqrt{N}} \sqrt{\Big[ E(f^2) -E^2(f)\Big] }
+\end{align*}
+
+$\Rrightarrow$ From the LLN we have (Buffon's needle, with $x$ drawn from $\rho(x)=\frac{2}{\pi}$ on $[0,\pi/2]$):
+\begin{align*}
+\frac{1}{N}\sum_{i=1}^N w(x_i) \xrightarrow{N\to \infty} P = \int w(x) \rho(x) dx = \int_0^{\pi/2} \Big(\frac{l}{L} \cos x \Big) \frac{2}{\pi} dx= \dfrac{2l}{\pi L}
+\end{align*}
+$\Rrightarrow$ An important comparison between the hit-and-miss and crude \mc~methods; one can calculate analytically:
+\begin{columns}
+
+\column{2.5in}
+\begin{align*}
+\hat{\sigma}^{{\rm{Crude}}} \simeq \dfrac{1.57}{\sqrt{N}}
+\end{align*}
+
+\column{2.5in}
+\begin{align*}
+\hat{\sigma}^{{\rm{Hit~and~miss}}} \simeq \dfrac{2.37}{\sqrt{N}}
+\end{align*}
+
+
+\end{columns}
+
+$\Rrightarrow$ The crude \mc~method is \textbf{always} better than the hit-and-miss method.
+
+
+\end{footnotesize}
+\end{frame}
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+
+
+\begin{frame}\frametitle{Classical methods of variance reduction}
+\begin{footnotesize}
+
+$\Rrightarrow$ In Monte Carlo methods the statistical uncertainty is defined as:
+\begin{align*}
+\sigma = \dfrac{1}{\sqrt{N}}\sqrt{V(f)}
+\end{align*}
+$\Rrightarrow$ Obvious conclusion:
+\begin{itemize}
+\item To reduce the uncertainty one needs to increase $N$.\\
+$\rightrightarrows$ Slow convergence: to reduce the error by a factor of 10 one needs to simulate a factor of 100 more points!
+\end{itemize}
+$\Rrightarrow$ However, the other handle ($V(f)$) can be changed! $\longrightarrow$ A lot of theoretical effort goes into reducing this factor.\\
+$\Rrightarrow$ We will discuss {\color{Mahogany}{four}} classical methods of variance reduction:
+\begin{enumerate}
+\item Stratified sampling.
+\item Importance sampling.
+\item Control variates.
+\item Antithetic variates.
+\end{enumerate}
+\end{footnotesize}
+\end{frame}
+\begin{frame}\frametitle{Stratified sampling}
+\begin{footnotesize}
+$\Rrightarrow$ The most intuitive method of variance reduction. The idea is to divide the integration range into subranges and use the additivity of the Riemann integral:
+\begin{align*}
+I = \int_0^1 f(u) du = \int_0^a f(u)du + \int_a^1 f(u) du,~ 0<a<1
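The split above can be sketched numerically: each stratum is integrated with its own uniform sample and the sub-integrals are summed. The integrand $e^u$, the four equal strata, and the helper names below are illustrative assumptions, not the lecture's example:

```python
import math
import random

rng = random.Random(7)
f = lambda u: math.exp(u)   # example integrand; exact integral is e - 1

def crude(f, n):
    """Crude MC estimate of integral_0^1 f(u) du (baseline)."""
    return sum(f(rng.random()) for _ in range(n)) / n

def stratified(f, edges, n_total):
    """Split [0,1] at the given edges and integrate each stratum separately."""
    n_j = n_total // (len(edges) - 1)          # equal points per stratum
    total = 0.0
    for lo, hi in zip(edges, edges[1:]):
        mean_j = sum(f(rng.uniform(lo, hi)) for _ in range(n_j)) / n_j
        total += (hi - lo) * mean_j            # sub-integral over [lo, hi]
    return total

est_crude = crude(f, 100_000)
est_strat = stratified(f, [0.0, 0.25, 0.5, 0.75, 1.0], 100_000)
# both estimate e - 1 ~ 1.71828; the stratified estimator has
# smaller variance because f varies less within each stratum
```

With the same total number of points, only the variation of $f$ inside each stratum contributes to the variance, which is why the method pays off most when $f$ changes strongly across $[0,1]$.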