\subsection{Loss}

As discussed in Sec. \ref{sec:theory}, the loss is the core of the problem specification, since it describes the disagreement between a model and the corresponding data. Additionally, it can contain constraints which help to further specify the problem. Having the loss as an independent part of the whole workflow is a crucial design feature of \zfit{}: it is the connection between the model, the data and their relation on one side and the minimisation process on the other. Treating the loss as an extra step decouples the former from the latter: a minimiser can take a loss and minimise it \textit{without} knowing anything about the underlying models, the data or the actual definition of the loss. Therefore, it is important that the loss knows everything that is needed for a minimisation, as listed in detail in Appendix \ref{appendix:loss defined}.

Basic loss implementations like the \pyth{UnbinnedNLL} use a PDF and data to calculate the loss according to Eq. \ref{eq:nll}. With extended PDFs, an additional term is taken into account if \pyth{ExtendedUnbinnedNLL} is used instead, as derived in Eq. \ref{eq:extended_likelihood}. Furthermore, additional terms can be added to express prior knowledge about parameters, as in Eq. \ref{eq:constraints}. To retain full flexibility, \textit{any} Tensor can be added as a constraint to the loss with the method \pyth{add_constraint}. Alternatively, a custom constraint can be implemented using the base class \pyth{BaseConstraint}. \zfit{} implements the most commonly used constraints to improve usability: in the example from Sec. \ref{sec:quickstart}, a Gaussian constraint on the parameter $\mu$ can be applied by adding the following lines after the creation of the \texttt{nll}.

\begin{center}
\begin{minipage}{\textwidth}
\begin{python}
mu = ...  # parameter from the quickstart example
nll = zfit.loss.UnbinnedNLL(gauss, data)
mu_constr = zfit.constraint.nll_gaussian(params=mu, mu=6.8, sigma=0.4)
nll.add_constraint(mu_constr)
\end{python}
\end{minipage}
\end{center}

Typically, some parameters are shared between fits to different data samples. The same information can be extracted through a simultaneous fit of all the datasets, built by creating multiple PDFs in which some of the \pyth{Parameter} objects are the same. Since this corresponds to a simple addition of the individual losses, as seen in Eq. \ref{eq:simultaneous_likelihood}, \zfit{} allows precisely this operation to be performed. As an example, we create two Gaussians and assume that their data are already available. Here $\mu$ is shared while the $\sigma$ parameters are not. The limits and the data are created as in the example of Sec. \ref{sec:quickstart}:

\begin{center}
\begin{minipage}{\textwidth}
\begin{python}
mu = zfit.Parameter("mu", 7)  # shared between both models
sigma1 = zfit.Parameter("sigma1", 1.1)  # parameter names must be unique
sigma2 = zfit.Parameter("sigma2", 1.5)
gauss1 = zfit.pdf.Gauss(mu=mu, sigma=sigma1, obs=limits)
gauss2 = zfit.pdf.Gauss(mu=mu, sigma=sigma2, obs=limits)

nll1 = zfit.loss.UnbinnedNLL(gauss1, data1)
nll2 = zfit.loss.UnbinnedNLL(gauss2, data2)
simul_nll = nll1 + nll2
\end{python}
\end{minipage}
\end{center}

Alternatively, a list of models and their corresponding data can be given to create the loss:

\begin{center}
\begin{minipage}{\textwidth}
\begin{python}
simul_nll = zfit.loss.UnbinnedNLL([gauss1, gauss2], [data1, data2])
\end{python}
\end{minipage}
\end{center}
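Since the combined loss carries everything needed for a minimisation, it can be handed directly to a minimiser, illustrating the decoupling described at the beginning of this section. The following is a minimal sketch, assuming the \pyth{Minuit} minimiser; the minimisation step itself is discussed in detail later.

\begin{center}
\begin{minipage}{\textwidth}
\begin{python}
# The minimiser treats simul_nll like any other loss; it needs
# no knowledge of the models or datasets it was built from.
minimizer = zfit.minimize.Minuit()
result = minimizer.minimize(simul_nll)
\end{python}
\end{minipage}
\end{center}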
There is a special loss available in \zfit{} that provides a flexibility rarely found in other fitting packages: the \pyth{SimpleLoss}. This object lightly wraps any Tensor, which makes it possible to build \textit{any} kind of loss. No dependency on the data structure or the model layout is imposed; both are entirely up to the user. This allows the creation of losses that are not yet implemented, such as binned ones, and lets other libraries that build loss functions with TF simply hook into this mechanism. Since a \pyth{SimpleLoss} can be used with the subsequent steps, such as the minimisation and the error estimation, another library can thereby make use of all the tools available in \zfit{} as well as in any library that builds on top of it.
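As an illustration, the following sketch wraps a simple quadratic function of $\mu$ into a \pyth{SimpleLoss}. The constructor arguments shown here (\pyth{params} and \pyth{errordef}) are assumptions about the exact signature and may differ between versions; the essential point is that the wrapped callable simply returns a Tensor.

\begin{center}
\begin{minipage}{\textwidth}
\begin{python}
def squared_deviation():
    # Any computation returning a scalar Tensor can serve as
    # a loss, here a quadratic deviation of mu from 6.8.
    return (mu - 6.8) ** 2

custom_nll = zfit.loss.SimpleLoss(
    squared_deviation,  # callable returning the loss Tensor
    params=[mu],        # parameters the loss depends on
    errordef=1)         # least-squares-like error definition
\end{python}
\end{minipage}
\end{center}

The resulting object can be minimised and used for error estimation in exactly the same way as the built-in losses.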