\documentclass[xcolor=svgnames]{beamer}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{polski}
%\usepackage{amssymb,amsmath}
%\usepackage[latin1]{inputenc}
%\usepackage{amsmath}
%\newcommand\abs[1]{\left|#1\right|}
\usepackage{amsmath}
\newcommand\abs[1]{\left|#1\right|}
\usepackage{hepnicenames}
\usepackage{hepunits}
\usepackage{color}



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\definecolor{mygreen}{cmyk}{0.82,0.11,1,0.25}


\usetheme{Sybila} 

\title[Jacobian for $\PBzero \to \PKstar \Pmuon \APmuon$]{Jacobian for $\PBzero \to \PKstar \Pmuon \APmuon$ \\ proposed solution}
\author{$\PBzero \to \PKstar \Pmuon \APmuon$ team}
%\institute{$^1$~University of Zurich}
\date{\today}

\begin{document}
% --------------------------- SLIDE --------------------------------------------
\frame[plain]{\titlepage}
\author{$\PBzero \to \PKstar \Pmuon \APmuon$ team}
%\institute{(UZH)}
% ------------------------------------------------------------------------------
% --------------------------- SLIDE --------------------------------------------
\begin{frame}\frametitle{Reminder}
\begin{small}

\begin{itemize}
\item We wanted to calculate the $P_i$ from $S_i$. 
\item Both Toy MC error propagation (generating toy experiments based on the covariance matrix) and bootstrapping the data set produce a distribution whose most probable value differs from the central value in the data (see the plot below: the most probable value from the toys differs from the generated one, marked by the red line).
\item As discussed during the referee meeting, we considered including the Jacobian in this picture.
\end{itemize}
\end{small}
\begin{center}
\includegraphics[width=0.45\textwidth]{images/P2.png}
\end{center}
\end{frame}


\begin{frame}\frametitle{Introduction}

\begin{itemize}
\item Let's write down explicitly what we all agree on (I hope, at least ;) ).
\begin{itemize}
\item The measurement of $\overrightarrow{S}=(F_l,~S_x)$ is unbiased.
\item The error is also correctly estimated, ensuring the correct coverage.
\end{itemize}
\item The question I am answering: what are the corresponding confidence intervals and probability distribution in the new space $\overrightarrow{P}=(F_l,~P_x)$?
\item To put it a bit more simply: I want to map one space onto the other. 
\item NB: this is a different question from asking what distribution of $\overrightarrow{P}$ the experiments would measure.
\end{itemize}

\end{frame}

\begin{frame}\frametitle{Some mathematical theorems and assumptions (1)}
\begin{itemize}
\item We have our standard transformation $\overrightarrow{S} \to \overrightarrow{P}$ (the $2\times2$ sub-case relevant for $P_2$ is spelled out on the next slide):
\begin{footnotesize}
\begin{align*}   
F_l      &\leftarrow  F_l\\                                                                     
P_1 &\leftarrow 2\frac{S_3}{1-F_{\rm L}}\\                                                          
P_2 &\leftarrow \frac{1}{2}\frac{S_6^s}{1-F_{\rm L}} = \frac{2}{3}\frac{A_{\rm FB}}{1-F_{\rm L}}\\  
P_3 &\leftarrow -\frac{S_9}{1-F_{\rm L}}\\                                                          
P_4^\prime &\leftarrow \frac{S_4}{\sqrt{F_{\rm L}(1-F_{\rm L})}}\\                                  
P_5^\prime &\leftarrow \frac{S_5}{\sqrt{F_{\rm L}(1-F_{\rm L})}}\\                                  
P_6^\prime &\leftarrow \frac{S_7}{\sqrt{F_{\rm L}(1-F_{\rm L})}}\\                                  
P_8^\prime &\leftarrow \frac{S_8}{\sqrt{F_{\rm L}(1-F_{\rm L})}}.                                   
\end{align*}                                                                               
\end{footnotesize}
\end{itemize}
\end{frame}
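
\begin{frame}\frametitle{Jacobian for the $(F_L,P_2)$ sub-case}
\begin{itemize}
\item To make the bookkeeping explicit, here is a sketch of the $2\times 2$ sub-case used in the toys below; it follows directly from the definition of $P_2$ on the previous slide.
\begin{footnotesize}
\begin{align*}
\frac{\partial(F_{\rm L},P_2)}{\partial(F_{\rm L},A_{\rm FB})} =
\begin{pmatrix}
1 & 0 \\[4pt]
\dfrac{2}{3}\dfrac{A_{\rm FB}}{(1-F_{\rm L})^{2}} & \dfrac{2}{3}\dfrac{1}{1-F_{\rm L}}
\end{pmatrix},
\qquad
\vert J \vert = \frac{2}{3}\,\frac{1}{1-F_{\rm L}}.
\end{align*}
\end{footnotesize}
\item The matrix is triangular, so the determinant is simply the product of the diagonal entries.
\item In particular, the central point maps directly: $\hat{P}_2 = \dfrac{2}{3}\dfrac{\hat{A}_{\rm FB}}{1-\hat{F}_{\rm L}}$.
\end{itemize}
\end{frame}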

   
 \begin{frame}\frametitle{Some mathematical theorems and assumptions (2)}
\begin{itemize}
\item We know the following about this transformation:
\begin{itemize}
\item The parameter space is a bounded domain ($D$). \checkmark
\item The angular PDF is a smooth function in the domain. \checkmark
\item There exists a 1:1 transformation between $\overrightarrow{S}$ and $\overrightarrow{P}$. \checkmark
\item Inside the domain the Jacobian is non-zero ($J \neq 0$). \checkmark
\end{itemize}
\item On the next slides you will see why these assumptions are needed.
\end{itemize}
\end{frame}  
   
   
    
\begin{frame}\frametitle{Some mathematical theorems and assumptions (3)}
\begin{itemize}
\item Now, since there is a 1:1 correspondence, the central point in $\overrightarrow{P}$ should be derived from the central point in the $\overrightarrow{S}$ basis.
\item Now the confidence belt. In $\overrightarrow{S}$, a $68\%$ confidence belt ($D$) satisfies:
\begin{align*}
\int_D f(\overrightarrow{S}) d \overrightarrow{S} = 0.68
\end{align*}
\item In this equation, our $D$ effectively corresponds to the errors that we quote.
\item Now, thanks to the assumptions from the previous slide, we can write (the $(F_L,P_2)$ sub-case is spelled out on the next slide):
\begin{align*}
\int_D \underbrace{f(\overrightarrow{S})}_{\rm{What~we~simulate/bootstrap}} d \overrightarrow{S} =  \int_{\Delta} \underbrace{f'(\overrightarrow{P})}_{\rm{What~we~get~in~P}}  \times \vert J \vert d\overrightarrow{P} 
\end{align*}

\end{itemize}
\end{frame}  
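
\begin{frame}\frametitle{The $(F_L,P_2)$ sub-case}
\begin{itemize}
\item A sketch of the same bookkeeping for the sub-space used in the toys: inverting $P_2$ gives $A_{\rm FB} = \frac{3}{2}(1-F_{\rm L})\,P_2$.
\item Toys generated in $(F_{\rm L},A_{\rm FB})$ and transformed to $(F_{\rm L},P_2)$ therefore pick up the factor
\begin{align*}
\left\vert \frac{\partial A_{\rm FB}}{\partial P_2} \right\vert = \frac{3}{2}(1-F_{\rm L})
\end{align*}
relative to $f$.
\item Weighting each toy by the reciprocal, $\vert J \vert = \frac{2}{3}\frac{1}{1-F_{\rm L}}$, removes this factor, so the weighted (joint) distribution peaks at the transformed central value.
\end{itemize}
\end{frame}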
     
   \begin{frame}\frametitle{Toys}
\begin{itemize}
\item So to get the integral correct we need to take the Jacobian into account.
\item Let's make a toy example calculating $P_2$ (a minimal weighting sketch is shown on the next slide). Values used (Gaussian distributed, mean $\pm$ error): $F_l= 0.7679 \pm 0.2$, $A_{FB} = -0.329 \pm 0.13$. 
\item The Jacobian: $J=\dfrac{2}{3} \dfrac{1}{1-F_L}$
\item Generated $F_l$ and $A_{FB}$:
\end{itemize}

\includegraphics[width=0.85\textwidth]{images/Fl_AFb.png}


\end{frame}  
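
% The following frame is a minimal illustrative sketch (NumPy assumed); it is
% not the analysis code, and the variable names are placeholders.
\begin{frame}[fragile]\frametitle{Toys: a weighting sketch}
\begin{itemize}
\item A minimal sketch of how such a weighting could be applied (NumPy assumed; not the analysis code):
\end{itemize}
\begin{footnotesize}
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 100000

# Gaussian toys with the central values and errors from the previous slide
fl  = rng.normal(0.7679, 0.20, n)
afb = rng.normal(-0.329, 0.13, n)

# stay inside the physical domain, where the 1:1 map and J != 0 hold
keep = (fl > 0.0) & (fl < 1.0)
fl, afb = fl[keep], afb[keep]

p2  = (2.0 / 3.0) * afb / (1.0 - fl)   # transform each toy to P_2
jac = (2.0 / 3.0) / (1.0 - fl)         # |J| = 2/3 * 1/(1 - F_L)

# histograms of P_2 without and with the Jacobian weight
h_plain, edges = np.histogram(p2, bins=100, range=(-2, 2))
h_jac, _       = np.histogram(p2, bins=100, range=(-2, 2), weights=jac)
\end{verbatim}
\end{footnotesize}
\end{frame}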
   
         
\begin{frame}\frametitle{Toys}
\begin{itemize}
\item Now, how does the new space look?
\item It is important to take the boundary into account, as without it all my theorems break down.
\item The white point is the value from which the toys were generated.
\end{itemize}
\begin{center}

\begin{columns}
\column{2.5in}
\begin{small}
Scatter plot $F_L:P_2$, no Jacobian
\end{small}

\column{0.5in}
{~}
\column{2.2in}

\begin{small}
Scatter plot $F_L:P_2$, with Jacobian
\end{small}

\end{columns}
\includegraphics[width=1.1\textwidth]{images/2DZ2.png}

\end{center}
\end{frame}   


 
\begin{frame}\frametitle{Re-parametrisation of the pdf}
\begin{itemize}
\item Re-parametrisation of the pdf gives exactly the same answer as the toys that take the Jacobian into account:
\end{itemize}
{~}\\{~}\\

\begin{columns}
\column{2.5in}
\begin{small}
Profile likelihood from re-parametrised pdf.
\end{small}
\includegraphics[width=0.9\textwidth]{images/LL_pdf.png}
\column{2.5in}
\begin{small}
Profile likelihood from toys with Jacobian
\end{small}
\includegraphics[width=0.9\textwidth]{images/LL_toys.png}
\end{columns}



\end{frame}           

\begin{frame}\frametitle{Toys Conclusions}
\begin{itemize}
\item We understand the source of the bias in the most probable value.
\item The Jacobian gives the same answer as the re-parametrisation of the pdf.
\item When we work out the interval on $P_2$ (etc.), should we use this Jacobian weighting?
\item One should not look only at 1D projections, since on them the most probable value is not the correct one:
\item Coverage of $P_i$ is ensured by the coverage of $S_i$.

\end{itemize}
\begin{columns}
\column{2.5in}
\includegraphics[width=0.6\textwidth]{images/P2.png}
\column{2.5in}
\includegraphics[width=0.6\textwidth]{images/cast.png}
\end{columns}



\end{frame}  
   


              
              
              
              
              
              
              
\end{document}