Massive overhaul, based on Christian's comments

This commit is contained in:
hiro98 2020-06-03 17:03:20 +02:00
parent a1a673ac42
commit 62a55180dd
43 changed files with 5777 additions and 2030 deletions

latex/figs/logo.pdf_tex Normal file
View file

@ -0,0 +1,58 @@
%% Creator: Inkscape 1.0 (4035a4fb49, 2020-05-01), www.inkscape.org
%% PDF/EPS/PS + LaTeX output extension by Johan Engelen, 2010
%% Accompanies image file 'logo.pdf' (pdf, eps, ps)
%%
%% To include the image in your LaTeX document, write
%% \input{<filename>.pdf_tex}
%% instead of
%% \includegraphics{<filename>.pdf}
%% To scale the image, write
%% \def\svgwidth{<desired width>}
%% \input{<filename>.pdf_tex}
%% instead of
%% \includegraphics[width=<desired width>]{<filename>.pdf}
%%
%% Images with a different path to the parent latex file can
%% be accessed with the `import' package (which may need to be
%% installed) using
%% \usepackage{import}
%% in the preamble, and then including the image with
%% \import{<path to file>}{<filename>.pdf_tex}
%% Alternatively, one can specify
%% \graphicspath{{<path to file>/}}
%%
%% For more information, please see info/svg-inkscape on CTAN:
%% http://tug.ctan.org/tex-archive/info/svg-inkscape
%%
\begingroup%
\makeatletter%
\providecommand\color[2][]{%
\errmessage{(Inkscape) Color is used for the text in Inkscape, but the package 'color.sty' is not loaded}%
\renewcommand\color[2][]{}%
}%
\providecommand\transparent[1]{%
\errmessage{(Inkscape) Transparency is used (non-zero) for the text in Inkscape, but the package 'transparent.sty' is not loaded}%
\renewcommand\transparent[1]{}%
}%
\providecommand\rotatebox[2]{#2}%
\newcommand*\fsize{\dimexpr\f@size pt\relax}%
\newcommand*\lineheight[1]{\fontsize{\fsize}{#1\fsize}\selectfont}%
\ifx\svgwidth\undefined%
\setlength{\unitlength}{191.32199478bp}%
\ifx\svgscale\undefined%
\relax%
\else%
\setlength{\unitlength}{\unitlength * \real{\svgscale}}%
\fi%
\else%
\setlength{\unitlength}{\svgwidth}%
\fi%
\global\let\svgwidth\undefined%
\global\let\svgscale\undefined%
\makeatother%
\begin{picture}(1,0.28986343)%
\lineheight{1}%
\setlength\tabcolsep{0pt}%
\put(0,0){\includegraphics[width=\unitlength,page=1]{logo.pdf}}%
\end{picture}%
\endgroup%

View file

@ -12,7 +12,7 @@
\usepackage[backend=biber, language=english, style=phys]{biblatex}
\usepackage{siunitx}
\usepackage[pdfencoding=auto]{hyperref} % load late
\usepackage{cleveref}
\usepackage[capitalize]{cleveref}
% \usepackage[activate={true,nocompatibility},final,tracking=true,spacing=true,factor=1100,stretch=10,shrink=10]{microtype}
\usepackage{caption}
\usepackage[list=true, font=small,
@ -28,6 +28,7 @@ labelformat=brace, position=top]{subcaption}
\usepackage{fancyvrb}
\usepackage[autostyle=true]{csquotes}
\usepackage{setspace}
\usepackage{newunicodechar}
%% use the current pgfplots
\pgfplotsset{compat=1.16}
@ -81,6 +82,9 @@ labelformat=brace, position=top]{subcaption}
%% Font for headings
\addtokomafont{disposition}{\rmfamily}
%% Minus Sign for Matplotlib
\newunicodechar{−}{-}
% Macros
%% qqgg
@ -110,6 +114,7 @@ labelformat=brace, position=top]{subcaption}
%% area hyperbolicus
\DeclareMathOperator{\artanh}{artanh}
\DeclareMathOperator{\arcosh}{arcosh}
%% Fast Slash
\let\sl\slashed

View file

@ -1 +0,0 @@
\(\sigma = \SI{0.0537937\pm 2.55202e-06}{\pico\barn}\)

View file

@ -3,31 +3,32 @@
Monte Carlo (MC) methods have been and still are among the most important
tools for numerical calculations in particle physics. Be it for
validating the well established standard model or for making
validating the well-established Standard Model or for making
predictions about new theories, MC simulations are the
crucial interface of theory and experimental data, making them
directly comparable. Furthermore horizontal scaling is almost trivial
to implement in MC algorithms, making them well adapted to
modern parallel computing. In this thesis, the use of MC
methods will be traced through from simple integration to the
simulation of proton-proton scattering.
directly comparable.% Furthermore horizontal scaling is almost trivial
% to implement in MC algorithms, making them well adapted to
% modern parallel computing.
In this thesis, the use of MC methods will be traced from
simple integration to the simulation of proton-proton scattering.
The ``Drosophila'' of this thesis is the quark annihilation into two
The ``Guinea Pig'' of this thesis is the quark annihilation into two
photons \(\qqgg\), henceforth called the diphoton process. It forms an
important background to the higgs decay channel
\(H\rightarrow \gamma\gamma\) and to a dihiggs decay
important background to the Higgs decay channel
\(H\rightarrow \gamma\gamma\) (which was instrumental in its
discovery) and to a di-Higgs decay
\(HH\rightarrow b\bar{b}\gamma\gamma\)~\cite{aaboud2018:sf}, while
still being a pure QED process and thus calculable by hand within the
scope of this thesis. The differential and total cross section of this
process is being calculated in leading order in \cref{chap:qqgg} and
the obtained result is compared to the total cross section obtained
with the \sherpa~\cite{Gleisberg:2008ta} event generator, used as
matrix element integrator. In \cref{chap:mc} some simple MC
methods are discussed, implemented and their results
compared. Beginning with a study of MC integration the
\vegas\ algorithm~\cite{Lepage:19781an} is implemented and
evaluated. Subsequently MC sampling methods are explored and
the output of \vegas\ is used to improve the sampling
still being a pure QED process at leading order and thus calculable by
hand within the scope of this thesis. The differential and total cross
section of this process are calculated at leading order in
\cref{chap:qqgg} and the obtained result is compared to the total
cross section obtained with the \sherpa~\cite{Gleisberg:2008ta} event
generator, used as a matrix element integrator. In \cref{chap:mc} some
simple MC methods are discussed, implemented and their results
compared. Beginning with a study of MC integration, the \vegas\
algorithm~\cite{Lepage:19781an} is implemented and
evaluated. Subsequently, MC sampling methods are explored and the
output of \vegas\ is used to improve the sampling
efficiency. Histograms of observables are generated and compared to
histograms from \sherpa\ using the \rivet~\cite{Bierlich:2019rhm}
analysis framework. \Cref{chap:pdf} deals with proton-proton
@ -46,15 +47,13 @@ effects. The impact of those effects on observables is studied in
\label{sec:convent}
Throughout, natural units with \(c=1, \hbar = 1, k_B=1, \varepsilon_0
= 1\) are used unless stated otherwise. Histograms without label on
the y-axis are normalized to unity and to be interpreted as
probability densities.
= 1\) are used unless stated otherwise.
\section{Source Code}%
\label{sec:source}
The (literate) python code used to generate most of the results and
figures can be found on under
figures can be found under
\url{https://github.com/vale981/bachelor_thesis/} and more
specifically in the subdirectory \texttt{prog/python/qqgg}.
@ -64,8 +63,8 @@ algorithm related functionality as a module. The file
that generates all the results of \cref{chap:mc}. The file
\texttt{parton\_density\_function\_stuff.org} contains all the
computations for \cref{chap:pdf}. The python code makes heavy use of
\href{https://www.scipy.org/}{scipy} (and of course
\href{https://numpy.org/}{numpy}).
\href{https://www.scipy.org/}{scipy}~\cite{2020Virtanen:Sc} (and of
course \href{https://numpy.org/}{numpy}).
%%% Local Variables: ***
%%% mode: latex ***

View file

@ -3,12 +3,15 @@
%%% TeX-master: "../document.tex" ***
%%% End: ***
\chapter{Survey of elementary Monte-Carlo Methods}%
\chapter{Survey of Elementary Monte Carlo Methods}%
\label{chap:mc}
Monte-carlo methods for multidimensional integration and sampling of
Monte Carlo methods for multidimensional integration and sampling of
probability distributions are central tools of modern particle
physics. Therefore some simple methods and algorithms are being
studied and implemented here and will be applied to the results
from \cref{chap:qqgg}. The \verb|python| code for the implementation
can be found in \cref{sec:mcpy}.
studied and implemented here and will be applied to the results from
\cref{chap:qqgg,sec:compsher}. The \verb|python| code for the
implementation can be found as described in \cref{sec:source}. The
sampling and integration intervals, as well as other parameters, have
been chosen as in \cref{sec:compsher}, so that
\result{xs/python/eta} and \result{xs/python/ecm}\!.

View file

@ -1,4 +1,4 @@
\section{Monte-Carlo Integration}%
\section{Monte Carlo Integration}%
\label{sec:mcint}
Consider a function
@ -6,8 +6,8 @@ Consider a function
probability density on
\(\rho\colon \vb{x}\in\Omega\mapsto\mathbb{R}_{\geq 0}\) with
\(\int_{\Omega}\rho(\vb{x})\dd{\vb{x}} = 1\). By multiplying \(f\)
with a one in the fashion of \cref{eq:baseintegral}, the integral
of \(f\) over \(\Omega\) can be interpreted as the expected value
with a one in the fashion of \cref{eq:baseintegral}, the integral of
\(f\) over \(\Omega\) can be interpreted as the expected value
\(\EX{F/\Rho}\) of the random variable \(F/\Rho\) under the
distribution \(\rho\). This is the key to most MC methods.
@ -58,16 +58,20 @@ value of \cref{eq:approxexp} varies around \(I\) with the variance
\frac{f(\vb{x_i})}{\rho(\vb{x_i})}]^2 \label{eq:varI-approx}
\end{align}
The name of the game now is to reduce \(\VAR{F/\Rho}\) to speed up the
convergence of \cref{eq:approxexp} and achieve higher accuracy with
fewer function evaluations. Some ways variance reductions can be
The goal now is to reduce \(\VAR{F/\Rho}\) to speed up the convergence
of \cref{eq:approxexp} and achieve higher accuracy with fewer function
evaluations. There are at least three angles of attack
in~\cref{eq:baseintegral}, namely the distribution \(\rho\), the
variable \(\vb{x}\), and the integration volume
\(\Omega\). Accordingly, some ways variance reduction can be
accomplished are choosing a suitable \(\rho\) (importance sampling),
by transforming the integral onto another variable, a combination of
both approaches or by subdividing integration volume into several
sub-volumes of different size while keeping the sample size constant
in all sub-volumes (stratified sampling).\footnote{There are of course
still other methods like the multi-channel method.} Combining ideas
from importance sampling and stratified sampling leads to the \vegas\
by transforming the integral onto another variable or by subdividing
the integration volume into several sub-volumes of different size while
keeping the sample size constant in all sub-volumes (stratified
sampling).\footnote{There are of course still other methods, like the
multi-channel method, and combinations of these methods can be
applied as well.} Combining ideas from importance
sampling and stratified sampling leads to the \vegas\
algorithm~\cite{Lepage:19781an} that approximates the optimal
distribution of importance sampling by adaptive subdivision of the
integration volume into a grid.
@ -75,17 +79,18 @@ integration volume into a grid.
The convergence of \cref{eq:approxexp} is not dependent on the
dimensionality of the integration volume as opposed to many other
numerical integration algorithms (trapezoid rule, Simpsons rule) that
usually converge like \(N^{-\frac{k}{n}}\) with \(k\in\mathbb{N}\) as
opposed to \(N^{-\frac{k}{n}}\) with MC. Because phase space
integrals in particle physics usually have a high dimensionality,
MC integration is suitable approach there. When implementing
MC methods, the random samples can be obtained through
hardware or software random number generators (RNGs). Most
implementations utilize software RNGs because supply pseudo-random
numbers in a reproducible way, which facilitates deniability and
comparability.~\cite{buckley:2011ge}
usually converge like \(N^{-\frac{k}{n}}\) with
\(k\in\mathbb{N}_{>0}\) and \(n\) being the dimensionality, as opposed
to \(N^{-\frac{1}{2}}\) with MC. Because phase space integrals in
particle physics usually have a high dimensionality, MC integration is
a suitable approach there. When implementing MC methods, the random
samples can be obtained through hardware or software random number
generators (RNGs). Most implementations utilize software RNGs because
they supply pseudo-random numbers in a reproducible way, which
facilitates debugging and comparability~\cite{buckley:2011ge}.
% TODO: maybe remove, ask Frank
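As a minimal illustration of this reproducibility (using numpy's
\texttt{default\_rng} as a stand-in for the RNGs actually employed),
two generators seeded identically produce identical sample streams:
\begin{verbatim}
import numpy as np

# Identical seeds yield identical pseudo-random streams, so runs
# can be reproduced and compared across machines.
a = np.random.default_rng(seed=42).uniform(size=3)
b = np.random.default_rng(seed=42).uniform(size=3)
assert np.array_equal(a, b)
\end{verbatim}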
\subsection{Naive Monte-Carlo Integration and Change of Variables}
\subsection{Naive Monte Carlo Integration and Change of Variables}
\label{sec:naivechange}
The simplest choice for \(\rho\) is given
@ -115,16 +120,15 @@ squared when approximating the integral by the sum.
\begin{equation}
\label{eq:approxvar-I}
\VAR{I} = \abs{\Omega}\int_\Omega f(\vb{x})^2 -
I^2 \dd{\vb{x}} \equiv \abs{\Omega}\cdot\sigma_f^2 \approx
\underbrace{\qty(\frac{I}{\abs{\Omega}})^2}_{=\bar{f}^2} \dd{\vb{x}} \equiv \abs{\Omega}\cdot\sigma_f^2 \approx
\frac{\abs{\Omega}^2}{N-1}\sum_{i}\qty[f(\vb{x}_i) - \bar{f}]^2
\end{equation}
Applying this method to integrate
\(2\pi\sin(\theta)\cdot\dv{\sigma}{\Omega}\) (see \cref{eq:crossec})
over a \(\theta\) interval, equivalent to \(\eta\in [-2.5, 2.5]\) with
a target accuracy of \(\varepsilon=10^{-3}\) results in
\result{xs/python/xs_mc} with a sample size of
\result{xs/python/xs_mc_N}.
Applying this method to integrate the \(\qqgg\) cross section from
\cref{eq:crossec} over a \(\theta\) interval equivalent to
\(\eta\in [-2.5, 2.5]\), with a target accuracy of
\(\varepsilon=10^{-3}\) results in \result{xs/python/xs_mc} with a
sample size of \result{xs/python/xs_mc_N}.
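The naive estimator just described can be sketched in a few lines of
python; the integrand here is a toy stand-in, not the actual
differential cross section:
\begin{verbatim}
import numpy as np

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Naive MC estimate of the integral of f over [a, b]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(a, b, n)    # uniform samples, rho = 1/|Omega|
    y = (b - a) * f(x)          # |Omega| * f(x_i)
    # mean and its standard error, cf. eq:varI-approx
    return y.mean(), y.std(ddof=1) / np.sqrt(n)

integral, error = mc_integrate(np.sin, 0.0, np.pi)  # exact value: 2
\end{verbatim}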
Changing variables and integrating \cref{eq:xs-eta} over \(\eta\) with
the same target accuracy yields~\result{xs/python/xs_mc_eta} with a
@ -133,7 +137,7 @@ reduction in variance and sample size can be understood qualitatively
by studying \cref{fig:xs-int-comp}. The differential cross section in
terms of \(\eta\)~(\cref{fig:xs-int-eta}) is less steep than the
differential cross section in terms of
\(\theta\)~(\cref{fig:xs-int-theta}) and takes on significant values
\(\theta\)~(\cref{fig:xs-int-theta}) and takes on large values
over most of the integration interval. In general, the Jacobian
arising in variable transformation has the same effect as the
probability density in importance sampling. It can be shown that
@ -168,7 +172,7 @@ sub-volume is the same~\cite{Lepage:19781an}. In importance sampling,
the optimal probability distribution is given
by \cref{eq:optimalrho}, where \(f(\Omega) \geq 0\) is presumed
without loss of generality. When applying \vegas\ to multi dimensional
integrals, \cref{eq:optimalrho} is usually modified to factorize into
integrals,~\cref{eq:optimalrho} is usually modified to factorize into
distributions for each variable to simplify calculations.
\begin{equation}
@ -177,17 +181,24 @@ distributions for each variable to simplify calculations.
\end{equation}
The idea behind \vegas\ is to subdivide \(\Omega\) into hypercubes
(create a grid), define \(\rho\) as step-function on those hypercubes
and iteratively approximating \cref{eq:optimalrho}, instead of trying
to minimize the variance directly~\cite{Lepage:19781an}. In the end,
the samples are concentrated where \(f\) takes on the highest values
and changes most rapidly. This is done by subdividing the hypercubes
into smaller chunks, based on their contribution to the integral and
then varying the hypercube borders until all hypercubes contain the
same number of chunks. Note that no knowledge about the integrand is
required. The probability density used by \vegas\ is given in
\cref{eq:vegasrho} with \(K\) being the number of hypercubes and
\(\Omega_i\) being the hypercubes themselves.
(create a grid), define \(\rho\) as a step function with constant
value on those hypercubes and iteratively approximate
\cref{eq:optimalrho}, instead of trying to minimize the variance
directly. In the end, the samples are concentrated where \(f\) takes
on the highest values and changes most rapidly. This is done by
subdividing the hypercubes into smaller chunks, based on their
contribution to the integral, which is calculated through MC sampling,
and then varying the hypercube borders until all hypercubes contain
the same number of chunks. So if the contribution of a hypercube is
large, it will be divided into more chunks than others. When the
hypercube borders are then shifted, this hypercube will shrink, while
others will grow by consuming the remaining chunks. Repeating this
step in so-called \vegas\ iterations drives the contribution of
each hypercube towards the same value. More details about the
algorithm can be found in~\cite{Lepage:19781an}. Note that no
knowledge about the integrand is required. The probability density
used by \vegas\ is given in \cref{eq:vegasrho} with \(K\) being the
number of hypercubes and \(\Omega_i\) being the hypercubes themselves.
\begin{equation}
\label{eq:vegasrho}
@ -206,18 +217,20 @@ required. The probability density used by \vegas\ is given in
weighting applied (\(f/\rho\)).}
\end{figure}
In one dimension the hypercubes become simple interval
increments. Applying \vegas\ to \cref{eq:crossec} with
This algorithm has been implemented in python and applied to
\cref{eq:crossec}. In one dimension the hypercubes become simple
interval increments, and applying \vegas\ with
\result{xs/python/xs_mc_θ_vegas_K} increments yields
\result{xs/python/xs_mc_θ_vegas} with
\result{xs/python/xs_mc_θ_vegas_N} samples. This result is comparable
with tho one obtained by parameter transformation in
\cref{sec:naivechange}. The sample count \(N\) is the total number of
evaluations of \(f\). The resulting increments and the weighted
integrand \(f/\rho\) are depicted in \cref{fig:xs-int-vegas}, along
with the original integrand and it is intuitively clear, that the
variance is being reduced. Smaller increments correspond to higher
sample density and lower weights, flattening out the integrand.
\result{xs/python/xs_mc_θ_vegas_N} function evaluations (including
\vegas\ iterations). This result is comparable with the one obtained
by parameter transformation in \cref{sec:naivechange}. The sample
count \(N\) is the total number of evaluations of \(f\). The resulting
increments and the weighted integrand \(f/\rho\) are depicted in
\cref{fig:xs-int-vegas}, along with the original integrand, and it is
intuitively clear that the variance is being reduced. Smaller
increments correspond to higher sample density and lower weights,
flattening out the integrand.
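The grid refinement at the core of \vegas\ can be sketched as follows
in one dimension; a full implementation additionally smooths the
per-increment estimates and damps the rebinning between iterations,
and the parameter names here are illustrative:
\begin{verbatim}
import numpy as np

def vegas_grid_1d(f, a, b, k=50, iterations=10, per_cell=100, seed=0):
    """Adapt k increments so each contributes equally to the integral.

    Assumes f > 0 on [a, b]."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(a, b, k + 1)
    for _ in range(iterations):
        widths = np.diff(edges)
        # MC estimate of each increment's contribution to the integral
        x = edges[:-1] + widths * rng.random((per_cell, k))
        contrib = np.abs(f(x)).mean(axis=0) * widths
        # move the edges to equal quantiles of the cumulative contribution
        cum = np.insert(np.cumsum(contrib), 0, 0.0)
        edges = np.interp(np.linspace(0, cum[-1], k + 1), cum, edges)
    return edges  # dense where f is large and steep
\end{verbatim}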
Generally the result gets better with more increments, but at the cost
of more \vegas\ iterations. The intermediate values of those

View file

@ -1,26 +1,32 @@
\section{Monte-Carlo Sampling}%
\section{Monte Carlo Sampling}%
\label{sec:mcsamp}
Drawing representative samples from a probability distribution (for
example a differential cross section) results in a set of
\emph{events}, the same kind of data, that is gathered in experiments
and from which one can the calculate samples from the distribution of
other observables without explicit transformation of the
distribution. Here the one-dimensional case is discussed.
\emph{events}. This procedure is called sampling and produces the same
kind of data that is gathered in experiments and from which one can
then calculate samples from the distribution of other observables
event-by-event without explicit transformation of the
distribution. Furthermore, these samples can be used as the basis for
more involved simulations (see \cref{chap:pheno}). Sampling shares
many characteristics with integration and thus the methods discussed
here use similar terminology and often inherit their fundamental ideas
from the integration methods.
Sampling a multi-dimensional distribution is equivalent to sampling
one dimensional distributions by reducing the distribution itself to
one variable through integration over the remaining variables and
then, keeping the first variable fixed, sampling the other variables
in a likewise manner.
Here the one-dimensional case is discussed. Sampling a
multi-dimensional distribution is equivalent to sampling
one-dimensional distributions by reducing the distribution itself to one
variable through integration over the remaining variables and then,
keeping the first variable fixed, sampling the other variables in a
likewise manner.
Consider a function \(f\colon x\in\Omega\mapsto\mathbb{R}_{\geq 0}\)
where \(\Omega = [0, 1]\) without loss of generality. Such a function
is proportional to a probability density \(\tilde{f}\). When \(X\) is
a uniformly distributed random variable on~\([0, 1]\) (which can be
easily generated) then a sample \({x_i}\) of this variable can be
easily generated), then a sample \({x_i}\) of this variable can be
transformed into a sample of \(Y\sim\tilde{f}\). Let \(x\) be a single
sample of \(X\), then a sample \(y\) of \(Y\) can be obtained by
sample of \(X\). A sample \(y\) of \(Y\) can be obtained by
solving \cref{eq:takesample} for \(y\).
\begin{equation}
@ -35,10 +41,11 @@ to \cref{eq:takesample}, the probability that
\int_{0}^{y'+\dd{y}'}f(x')\dd{x'}]\) which is
\(A^{-1}\qty(\int_{0}^{y'+\dd{y}'}f(x')\dd{x'} -
\int_{0}^{y'}f(x')\dd{x'}) = A^{-1} f(y')\dd{y}'\). So \(y\) is really
distributed according to \(f/A\).
distributed according to \(f/A=\tilde{f}\).
If the antiderivative \(F\) of is known, then the solution
of \cref{eq:takesample} is given by \cref{eq:solutionsamp}.
If the antiderivative \(F\) (and its inverse) of \(f\) is known, then
the solution of \cref{eq:takesample} is given by
\cref{eq:solutionsamp}.
\begin{equation}
\label{eq:solutionsamp}
@ -51,9 +58,9 @@ obtained numerically or one can change variables to simplify.
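For the simple case \(f(x) = e^{-x}\) on \([0, 1]\), where both \(F\)
and its inverse are elementary, this sampling prescription reads (the
concrete \(f\) is chosen purely for illustration):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = 1.0 - np.exp(-1.0)           # A = F(1) - F(0) with F(x) = 1 - exp(-x)

def sample(n):
    u = rng.random(n)            # uniform samples of X on [0, 1]
    return -np.log(1.0 - u * A)  # y = F^{-1}(u * A), cf. eq:solutionsamp

ys = sample(100_000)             # distributed according to f/A
\end{verbatim}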
\subsection{Importance Sampling and Hit or Miss}%
\label{sec:hitmiss}
If integrating \(f\) and/or inverting \(F\) is too expensive or a
fully \(f\)-agnostic method is desired, the problem can be
reformulated by introducing a positive function
If integrating \(f\) or inverting \(F\) is too expensive or a fully
\(f\)-agnostic method is desired, the problem can be reformulated by
introducing a positive function
\(g\colon x\in\Omega\mapsto\mathbb{R}_{\geq 0}\) with
\(\forall x\in\Omega\colon g(x)\geq f(x)\).
@ -71,8 +78,9 @@ probability~\(f/g\), so that \(g\) cancels out. This is known as the
\end{equation}
The thus obtained samples are then distributed according to \(f/B\)
and the total probability of accepting a sample (efficiency
\(\mathfrak{e}\)) is given by hat \cref{eq:impsampeff} holds.
and the total probability of accepting a sample, also called the
efficiency \(\mathfrak{e}\), is given by \cref{eq:impsampeff}.
\begin{equation}
\label{eq:impsampeff}
@ -80,13 +88,7 @@ and the total probability of accepting a sample (efficiency
\end{equation}
The closer the volumes enclosed by \(g\) and \(f\) are to each other,
higher is \(\mathfrak{e}\). This method is called importance sampling
\begin{wrapfigure}[15]{l}{.5\textwidth}
\plot{xs_sampling/upper_bound}
\caption{\label{fig:distcos} The distribution \cref{eq:distcos} and an upper bound of
the form \(a + b\cdot x^2\).}
\end{wrapfigure}
the higher \(\mathfrak{e}\) is.
Choosing \(g\) like \cref{eq:primitiveg} and looking back at
\cref{eq:solutionsamp} yields \(y = x\cdot A\), so that the sampling
@ -95,14 +97,20 @@ accepting them with the probability \(f(x)/g(x)\). The efficiency of
this approach is related to how much \(f\) differs from
\(f_{\text{max}}\), which is in turn related to the variance of
\(f\). Minimizing variance will therefore improve sampling
performance. The method can also in higher dimensions be used without
modification.
performance. The method can also be used in higher dimensions without
modification and has again been implemented and evaluated.
\begin{equation}
\label{eq:primitiveg}
g=\max_{x\in\Omega}f(x)=f_{\text{max}}
\end{equation}
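A sketch of this hit-or-miss procedure with the constant bound
\(f_{\text{max}}\) (the integrand is again a toy stand-in):
\begin{verbatim}
import numpy as np

def hit_or_miss(f, f_max, n, seed=0):
    """Rejection sampling on [0, 1] with the constant bound g = f_max."""
    rng = np.random.default_rng(seed)
    x = rng.random(n)                       # proposals, uniform as g is constant
    accept = rng.random(n) * f_max <= f(x)  # keep x with probability f(x)/f_max
    return x[accept], accept.mean()         # samples and estimated efficiency

samples, eff = hit_or_miss(lambda x: x**2, f_max=1.0, n=100_000)
\end{verbatim}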
\begin{wrapfigure}[15]{l}{.5\textwidth}
\plot{xs_sampling/upper_bound}
\caption{\label{fig:distcos} The distribution \cref{eq:distcos} and an upper bound of
the form \(a + b\cdot x^2\).}
\end{wrapfigure}
Using the upper bound defined in \cref{eq:primitiveg} with the
distribution for \(\cos\theta\) derived from the differential cross
section \cref{eq:crossec} given in \cref{eq:distcos}
@ -115,23 +123,21 @@ of~\result{xs/python/naive_th_samp}.
\end{equation}
This very low efficiency stems from the fact that \(f_{\cos\theta}\)
is a lot smaller than its upper bound for most of the sampling
interval.
is a lot smaller than its maximum in most of the sampling interval.
Utilizing an upper bound of the form \(a + b\cdot x^2\) with \(a, b\)
constant improves the efficiency
to~\result{xs/python/tuned_th_samp}. The distribution, as well as the
upper bound, are depicted in \cref{fig:distcos}.
\subsection{Importance Sampling through Change of Variables}%
\label{sec:importsamp}
When transforming \(f\) to a new variable \(y=y(x)\) one arrives at
\cref{eq:transff} and may reduce variance, analogous to
\cref{sec:naivechange}. Transforming the distribution in a beneficial
way in the context of sampling a distribution is called
\emph{importance sampling}.
way is an alternative method of performing \emph{importance
sampling}.
\begin{equation}
\label{eq:transff}
@ -140,31 +146,30 @@ way in the context of sampling a distribution is called
The optimal transformation would be the solution of \(y = F(x)\)
(\(F\) being the antiderivative), so that
\(f(x(y)) \cdot \dv{x(y)}{y} = 1\), which is the same as sampling in
the way described in \cref{eq:solutionsamp}. This is no coincident as
it can be shown, that this method is equivalent with the method
described in \cref{sec:hitmiss} \cite{Lepage:19781an}. The technical
trade-off of this method is that one has to chain it with some other
sampling technique (\(\tilde{f}\) still has to be sampled). On the
other hand the step of generating samples distributed according to
\(g\) falls away.
\(f(x(y)) \cdot \dv{x(y)}{y} = 1\). But transforming \(f\) in this way
is the same as solving \cref{eq:takesample}, which is a problem that
has been addressed in \cref{sec:hitmiss}. The difference here is that
we restate the sampling problem in \(y\) space, which separates the
step of converting our \(y\) samples to \(x\) samples away from the
sampling process (see \cref{sec:obs}). Solving \cref{eq:takesample}
may now be easier, or applying the hit or miss method may be more
efficient.
When transforming the differential cross-section to the pseudo
rapidity \(\eta\), the efficiency of the hit or miss method rises
to~\result{xs/python/eta_eff}, so applying this method in conjunction
with others is worthwhile.
with others is worthwhile (see \cref{fig:xs-int-comp}).
\subsection{Hit or Miss and \vegas}%
\label{sec:stratsamp}
Finding a suitable upper bound or variable transform requires effort
and detail knowledge about the distribution and is hard to
automate\footnote{Sherpa does in fact do this by looking at the
propagators in the matrix elements.}. Revisiting the idea
behind \cref{eq:takesample2d} but looking at probability density
\(\rho\) on \(\Omega\) leads to a slight reformulation of the method
discussed in \cref{sec:hitmiss}. Note that without loss of generality
one can again choose \(\Omega = [0, 1]\).
Finding a suitable upper bound or variable transformation requires
effort and detailed knowledge about the distribution and is hard to
automate. Revisiting the idea behind \cref{eq:takesample2d} but
looking at a probability density \(\rho\) on \(\Omega\) leads to a
slight reformulation of the method discussed in
\cref{sec:hitmiss}. Note that without loss of generality one can again
choose \(\Omega = [0, 1]\).
Define \(h=\max_{x\in\Omega}f(x)/\rho(x)\), take a sample
\(\{\tilde{x}_i\}\sim\rho\) distributed according to \(\rho\) and
@ -172,11 +177,7 @@ accept each sample point with the probability
\(f(\tilde{x}_i)/(\rho(\tilde{x}_i)\cdot h)\). This is very similar to the procedure
described in \cref{sec:hitmiss} with \(g=\rho\cdot h\), but here the
step of generating samples distributed according to \(\rho\) is left
out.
The important benefit of this method is, that step of generating
samples according to some other function \(g\) is no longer
necessary. This is useful when samples of \(\rho\) can be obtained
out as these samples are assumed to be available or can be obtained
with little effort (see below). The efficiency of this method is given
by \cref{eq:strateff}.
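Assuming the increment boundaries come from the grid refinement
sketched earlier, this \vegas-weighted hit or miss can be sketched as
follows (here \(h\) is estimated from the drawn sample rather than
known exactly):
\begin{verbatim}
import numpy as np

def vegas_hit_or_miss(f, edges, n_per_cell, seed=0):
    """Hit or miss with rho from a vegas grid, cf. eq:vegasrho."""
    rng = np.random.default_rng(seed)
    k = len(edges) - 1
    widths = np.diff(edges)
    rho = 1.0 / (k * widths)      # piecewise-constant density
    # the same number of uniform samples per increment => x ~ rho
    x = (edges[:-1] + widths * rng.random((n_per_cell, k))).ravel()
    rho_x = np.tile(rho, n_per_cell)
    h = np.max(f(x) / rho_x)      # estimate of max f/rho
    accept = rng.random(x.size) * h * rho_x <= f(x)
    return x[accept]
\end{verbatim}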
@ -200,20 +201,21 @@ constraint \(\int_0^1\rho(x)\dd{x}=1\) to be considered.
The closer \(h\) approaches \(A\) the better the efficiency gets. In
the optimal case \(\rho=f/A\) and thus \(h=A\) or
\(\mathfrak{e} = 1\). Now this distribution can be approximated in the
way discussed in \cref{sec:mcintvegas} by using the hypercubes found
by~\vegas and simply generating the same number of uniformly
distributed samples in each hypercube (stratified sampling). The
distribution \(\rho\) takes on the form \cref{eq:vegasrho}. The
effect of this approach is visualized in \cref{fig:vegasdist} and the
resulting sampling efficiency \result{xs/python/strat_th_samp} (using
\(\mathfrak{e} = 1\). This distribution can be approximated in the way
discussed in \cref{sec:mcintvegas} by using the hypercubes found
by~\vegas\ and simply generating the same number of uniformly
distributed samples in each hypercube. The distribution \(\rho\) takes
on the form \cref{eq:vegasrho}. The effect of this approach is
visualized in \cref{fig:vegasdist} and the resulting sampling
efficiency \result{xs/python/strat_th_samp} (using
\result{xs/python/vegas_samp_num_increments} increments) is a great
improvement over the hit or miss method in \cref{sec:hitmiss}. By using
more increments better efficiencies can be achieved, although the
run-time of \vegas\ increases. The advantage of \vegas\ in this
situation is, that the computation of the increments has to be done
only once and can be reused. Furthermore, no special knowledge about
the distribution \(f\) is required.
improvement over the hit or miss method in \cref{sec:hitmiss} and even
surpasses the variable transformation to \(\eta\). By using more
increments, better efficiencies can be achieved, although the run-time
of \vegas\ increases. The advantage of \vegas\ in this situation is
that the computation of the increments has to be done only once and
can be reused. Furthermore, no special knowledge about the
distribution \(f\) is required.
\begin{figure}[ht]
\centering
@ -223,8 +225,7 @@ the distribution \(f\) is required.
differential cross-section and the \vegas-weighted
distribution]{\label{fig:vegasdist} The distribution for
\(\cos\theta\) (see \cref{eq:distcos}) and the \vegas-weighted
distribution. The inc It is intuitively clear, how variance is
being reduced.}
distribution.}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\plot{xs_sampling/vegas_rho}
@ -237,6 +238,45 @@ the distribution \(f\) is required.
and weighting distribution.}
\end{figure}
\subsection{Stratified Sampling}
\label{sec:stratsamp-real}
Yet another approach is to subdivide the sampling volume \(\Omega\)
into \(K\) sub-volumes \(\Omega_i\subset\Omega\) and then take a
number of samples from each volume proportional to the integral of the
function \(f\) in that volume. This is known as \emph{stratified
sampling}. The advantage of this method is that it is now possible
to optimize the sampling in each sub-volume independently.
Let \(N\) be the total sample count (\(N\gg 1\)),
\(A_i = \int_{\Omega_i}f(x)\dd{x}\) and \(A=\sum_iA_i\).
The total efficiency of taking \(N_i=A_i/A \cdot N\)
samples in each sub-volume is then given by \cref{eq:rstrateff}.
\begin{equation}
\label{eq:rstrateff}
\mathfrak{e} = \frac{\sum_i N_i}{\sum_i N_i/\mathfrak{e}_i} =
\frac{\sum_i A_i/A\cdot N}{\sum_i A_i/A\cdot N/ \mathfrak{e}_i} = \frac{\sum_i A_i}{\sum_i A_i/\mathfrak{e}_i}
\end{equation}
In the case when all \(\mathfrak{e}_i\) are the same, the total
efficiency is the same as the individual efficiencies. In the case of
one efficiency being much smaller than the others, this efficiency
dominates the overall efficiency (assuming somewhat similar
\(A_i\)). So in general one should optimize so that the individual
efficiencies are roughly the same. Using the \(\Omega_i\) generated by
\vegas\ has the advantage that this requirement can be approximated
and the \(A_i\) have already been obtained by \vegas. The technical
difficulty in the implementation is the way in which sample points get
distributed among the sub-volumes. The pitfall here is that the
\(A_i\) (and the upper bounds for the hit-or-miss method) have to be
determined with increasing accuracy as the sample size grows.
This method will be applied to multi-dimensional sampling in
\cref{sec:pdf_results}.
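A small numeric illustration of this allocation and of
\cref{eq:rstrateff}; the per-volume integrals and efficiencies are
made up for the example:
\begin{verbatim}
import numpy as np

A = np.array([0.5, 0.3, 0.2])   # illustrative per-volume integrals A_i
e = np.array([0.6, 0.5, 0.4])   # illustrative per-volume efficiencies
N = 100_000

N_i = np.rint(A / A.sum() * N)       # samples allocated per sub-volume
total_eff = A.sum() / np.sum(A / e)  # cf. eq:rstrateff, here ~0.52
\end{verbatim}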
\subsection{Observables}%
\label{sec:obs}
@ -245,11 +285,10 @@ observables can be calculated from those samples without having to
transform the distribution into new variables. This is due to the
discrete nature of the samples. Suppose there is an observable
\(\gamma\colon\Omega\mapsto\mathbb{R}\). Now to take a sample
\(\{x_i\}\) of \(\gamma\) we sample \(f\)\footnote{As defined
in \cref{sec:mcsamp}.} and convert the sample values by simply
applying \(\gamma\). This is equivalent to
substituting \(y=\gamma^{-1}(z)\) in \cref{eq:takesample} and solving
for \(z\).
\(\{x_i\}\) of \(\gamma\) we sample \(f\) and convert the sample
values by simply applying \(\gamma\), where \(f\) is as defined in
\cref{sec:mcsamp}. This is equivalent to substituting
\(y=\gamma^{-1}(z)\) in \cref{eq:takesample} and solving for \(z\).
The probability that \(z\in[z', z'+\dd{z'}]\) now is the same as the
probability that
@ -273,8 +312,8 @@ as described in \cref{eq:observables}.
\begin{align}
\label{eq:observables}
\pt &= \sqrt{(p_1)^2+(p_2)^2} & \eta &=
\frac{\abs{\vb{p}}}{\pt}\cdot\sign(p^3)
\pt &= \sqrt{(p_x)^2+(p_y)^2} & \eta &=
\arcosh\qty(\frac{\abs{\vb{p}}}{\pt})\cdot\sign(p_z)
\end{align}
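These observables translate directly into code; the sketch below
assumes the sampled photon momenta are available as an
\((n, 3)\)-shaped numpy array:
\begin{verbatim}
import numpy as np

def observables(p):
    """p_T and eta from an (n, 3) array of momenta (p_x, p_y, p_z)."""
    px, py, pz = p.T
    pt = np.hypot(px, py)        # sqrt(px^2 + py^2)
    # |p|/pt = cosh(eta) for massless particles, cf. eq:observables
    eta = np.arccosh(np.linalg.norm(p, axis=1) / pt) * np.sign(pz)
    return pt, eta
\end{verbatim}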
The histograms in \cref{fig:histos} have been created by generating
@ -284,7 +323,7 @@ contains reference histograms created by generating events with
toolkit~\cite{Bierlich:2019rhm}. The utilized analysis can be found
in \cref{sec:simpdiphotriv}.
\begin{figure}[hb]
\begin{figure}[ht]
\centering \plot{xs_sampling/diff_xs_p_t}
\caption{\label{fig:diff-xs-pt} The differential cross section
transformed to \(\pt\).}
@ -309,53 +348,17 @@ in \cref{sec:simpdiphotriv}.
\end{figure}
Where \cref{fig:histeta} shows clear resemblance of
\cref{fig:xs-int-eta}, the sharp peak in \cref{fig:histpt} around
\(\pt=\SI{100}{\giga\electronvolt}\) seems surprising. When
transforming the differential cross section to \(\pt\) it can be seen
in \cref{fig:diff-xs-pt} that there really is a singularity at
\(\pt =\abs{\vb{p}}\). This singularity will vanish once considering a
more realistic process (see \cref{chap:pdf}). Furthermore the
histograms \cref{fig:histeta,fig:histpt} are consistent
with their \rivet-generated counterparts and are therefore considered
valid.
\subsection{Stratified Sampling}
\label{sec:stratsamp-real}
A different approach is to subdivide the sampling volume \(\Omega\)
into \(K\) sub-volumes \(\Omega_i\subset\Omega\) and then take a
number of samples from each volume proportional to the integral of the
function \(f\)\footnote{As defined in \cref{sec:mcsamp}.} in that
volume. The advantage of this method is, that it is now possible to
optimize the sampling in each sub-volume independently.
Let \(N\) be the total sample count (\(N\gg 1\)),
\(A_i = \int_{\Omega_i}f(x)\dd{x}\) and \(A=\sum_iA_i\).
Then we can calculate the total efficiency of taking \(N_i=A_i/A \cdot N\)
samples in each is then given by \cref{eq:rstrateff}.
\begin{equation}
\label{eq:rstrateff}
\mathfrak{e} = \frac{\sum_i N_i}{\sum_i N_i/\mathfrak{e}_i} =
\frac{\sum_i A_i/A\cdot N}{\sum_i A_i/A\cdot N/ \mathfrak{e}_i} = \frac{\sum_i A_i}{\sum_i A_i/\mathfrak{e}_i}
\end{equation}
In the case when all \(\mathfrak{e}_i\) are the same, the total
efficiency is the same as the individual efficiencies. In the case of
one efficiency being much smaller then the others, this efficiency
dominates the overall efficiency (assuming somewhat similar
\(A_i\)). So in general one should optimize so that the individual
efficiencies are roughly the same. Using the \(\Omega_i\) generated by
\vegas\ has the advantage, that exactly this requirement is fulfilled
and the \(A_i\) have already been obtained by \vegas. The technical
difficulty in the implementation is the way in which sample points get
distributed among the sub-volumes. The pitfall here is that the
\(A_i\) (and the upper bounds for the hit-or-miss method) have to be
determined increasingly accurate with growing sample size.
This method will be applied to multi-dimensional sampling in
\cref{sec:pdf_results}.
\cref{fig:xs-int-eta}, the peak and the rise before this peak in
\cref{fig:histpt} around \(\pt=\SI{100}{\giga\electronvolt}\) seem
surprising and open up the possibility of producing hard
transverse photons. When transforming the differential cross section
to \(\pt\) it can be seen in \cref{fig:diff-xs-pt} that there really
is a singularity at \(\pt = \ecm/2\), due to a term
\(1/\sqrt{1-(2\cdot \pt/\ecm)^2}\) stemming from the Jacobian
determinant. This singularity will vanish once a more realistic
process is considered (see \cref{chap:pdf}). Furthermore, the
histograms \cref{fig:histeta,fig:histpt} are consistent with their
\rivet-generated counterparts and are therefore considered valid.
%%% Local Variables:
%%% mode: latex

View file

@ -5,13 +5,13 @@
Because free quarks do not occur in nature, one has to study the
scattering of hadrons to obtain experimentally verifiable
results. Hadrons are usually modeled as consisting of multiple
\emph{partons} using Parton Density Functions (PDFs). By applying a
simple PDF, the cross section for the process \(\ppgg\) on the
matrix-element~\cite[14]{buckley:2011ge} level\footnote{Neglecting the
remnants and other processes like parton showers, primordial
transverse momentum and multiple interactions.} and event samples of
that process have been obtained. These results are once again be
compared with results from \sherpa.
\emph{partons} (i.e. quarks and gluons) using Parton Density Functions
(PDFs). By using a leading order PDF, the cross section for the
process \(\ppgg\) on the matrix-element~\cite[14]{buckley:2011ge}
level\footnote{Neglecting the remnants and other processes like parton
showers, primordial transverse momentum and multiple interactions.}
and event samples of that process are obtained. These results are
being compared with results from \sherpa.
%%% Local Variables:
%%% mode: latex

View file

@ -2,7 +2,7 @@
\label{sec:lab_xs}
To utilize \cref{eq:pdf-xs} for modeling the~\(\ppgg\) process and to
generate event samples the results of \cref{chap:qqgg} have to be
generate event samples, the results of \cref{chap:qqgg} have to be
transformed into the center of momentum frame of the colliding
protons. Quantities in the center of momentum frame of the partons
will be starred (like \(x^\ast\)).

View file

@ -2,10 +2,11 @@
\label{sec:pdf_basics}
Parton Density Functions encode, restricting considerations to leading
order, the probability to \emph{encounter} a constituent parton (quark
or gluon) of a hadron with a certain momentum fraction \(x\) at a
certain factorization scale \(Q^2\). PDFs are normalized according to
\cref{eq:pdf-norm}, where the sum runs over all partons.
order, the probability to encounter a constituent parton (quark or
gluon) of a hadron with a certain momentum fraction \(x\) at a certain
factorization scale \(Q^2\) in a scattering process. PDFs are
normalized according to \cref{eq:pdf-norm}, where the sum runs over
all partons.
\begin{equation}
\label{eq:pdf-norm}
@ -14,23 +15,24 @@ certain factorization scale \(Q^2\). PDFs are normalized according to
More precisely \({f_i}\) denotes a PDF set, which is referred to
simply as PDF in the following. PDFs can not be derived from first
principles (at the moment) and have to be determined experimentally
for low \(Q^2\) and are evolved to higher \(Q^2\) through the
\emph{DGLAP} equations~\cite{altarelli:1977af} at different orders of
perturbation theory. In deep inelastic scattering \(Q^2\) is just the
negative over the momentum transfer \(-q^2\). For more complicated
processes \(Q^2\) has to be chosen in a way that reflects the
\emph{momentum resolution} of the process. If the perturbation series
behind the process would be expanded to the exact solution, the
principles and have to be determined experimentally for low \(Q^2\)
and are evolved to higher \(Q^2\) through the \emph{DGLAP}
equations~\cite{altarelli:1977af} at different orders of perturbation
theory. In deep inelastic scattering \(Q^2\) is just the negative
of the squared momentum transfer, \(-q^2\). For more complicated processes
\(Q^2\) has to be chosen in a way that reflects the
\emph{energy-momentum scale} of the process. If the perturbation
series behind the process were expanded to the exact solution, the
dependence on the factorization scale would vanish. At lower orders, one
has to choose the scale in a \emph{physically
meaningful}\footnote{That means: not in an arbitrary way.} way,
which reflects characteristics of the process~\cite{altarelli:1977af}.
In the case of \(\qqgg\) the mean of the Mandelstam variables \(\hat{t}\)
and \(\hat{u}\), which is equal to \(\hat{s}/2\), can be used. This
choice is lorentz-invariant and reflects the s/u-channel nature of the
process.
In the case of \(\qqgg\) the mean of the Mandelstam variables
\(\hat{t}\) and \(\hat{u}\), which is equal to \(\hat{s}/2\), can be
used. This choice is Lorentz-invariant and reflects the s/u-channel
nature of the process, although the \(\pt\) of the photon would also have
been a good choice~\cite[18]{buckley:2011ge}.
The (differential) hadronic cross section for scattering of two
partons in equal hadrons is given in \cref{eq:pdf-xs}. Here \(i,j\)

View file

@ -25,10 +25,10 @@ is justified in \cref{sec:pdf_basics} and formulated in
\end{gather}
The PDF set being used in the following has been fitted (and
developed) at leading order and is the central member of the PDF set
\verb|NNPDF31_lo_as_0118| provided by \emph{NNPDF} collaboration and
accessed through the \lhapdf\
library~\cite{NNPDF:2017pd}\cite{Buckley:2015lh}.
\emph{DGLAP}-evolved) at leading order and is the central member of
the PDF set \verb|NNPDF31_lo_as_0118| provided by \emph{NNPDF}
collaboration and accessed through the \lhapdf\
library~\cite{NNPDF:2017pd,Buckley:2015lh}.
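For illustration, this central member can be queried through
\lhapdf's python bindings roughly as follows (a sketch; the actual
code may wrap these calls differently):
\begin{verbatim}
import lhapdf

pdf = lhapdf.mkPDF("NNPDF31_lo_as_0118", 0)  # member 0 = central member
x, Q2 = 1e-2, 1e4
# xfxQ2 returns x * f_i(x, Q^2); PDG ids: 1 = d, 2 = u, 21 = gluon
xfx_up = pdf.xfxQ2(2, x, Q2)
\end{verbatim}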
\subsection{Cross Section}%
\label{sec:ppxs}
@ -36,9 +36,9 @@ library~\cite{NNPDF:2017pd}\cite{Buckley:2015lh}.
The distribution \cref{eq:weighteddist} can now be integrated to
obtain a total cross-section as described in \cref{sec:mcint}. For
the numeric analysis a proton beam energy of
\result{xs/python/pdf/e_proton} has been chosen, in accordance to
\lhc{} beam energies. As for the cuts, \result{xs/python/pdf/eta} and
\result{xs/python/pdf/min_pT} have been set. Integrating
\result{xs/python/pdf/e_proton} has been chosen, resembling
\lhc{} beam energies. The cuts \result{xs/python/pdf/eta} and
\result{xs/python/pdf/min_pT} have been imposed. Integrating
\cref{eq:weighteddist} with respect to those cuts using \vegas\ yields
\result{xs/python/pdf/my_sigma} which is compatible with
\result{xs/python/pdf/sherpa_sigma}, the value \sherpa\ gives.
@ -47,19 +47,19 @@ the numeric analysis a proton beam energy of
\label{sec:ppevents}
Generating events of \(\ppgg\) is very similar in principle to
sampling partonic cross section. As before, the range of the \(\eta\)
parameter has to be constrained to obtain physical results. Because
the absolute values of the pseudo rapidities of the two final state
photons are not equal in the lab frame, the shape of the
integration/sampling volume differs from a simple hypercube
sampling the partonic cross section. As before, the range of the
\(\eta\) parameter has to be constrained to obtain physical
results. Because the absolute values of the pseudo rapidities of the
two final state photons are not equal in the lab frame, the shape of
the integration/sampling volume differs from a simple hypercube
\(\Omega\). Furthermore, for the massless limit to be applicable the
center of mass energy of the partonic system must be much greater than
the quark masses. This can be implemented by demanding the transverse
momentum \(p_T\) of a final state photon to be greater than
approximately~\SI{20}{\giga\electronvolt}. A restriction (cut) on
\(p_T\) is suitable because detectors are usually only sensitive above
a certain \(p_T\) threshold and the final state particles have to be
isolated from the beams.
momentum \(p_T\) of a final state photon to be greater
than~\SI{20}{\giga\electronvolt}. A restriction (cut) on \(p_T\) is
suitable because detectors are usually only sensitive above a certain
\(p_T\) threshold and the final state particles have to be isolated
from the beams.
The resulting distribution (without cuts) is depicted in
\cref{fig:dist-pdf} for fixed \(x_2\) and in
@ -67,7 +67,7 @@ The resulting distribution (without cuts) is depicted in
the distribution retains some likeness with the partonic distribution
(see \cref{fig:xs-int-eta}) but gets suppressed for greater values of
\(x_1\). The overall shape of the distribution is clearly highly
sub-optimal for hit-or-miss sampling, only having significant values
sub-optimal for hit-or-miss sampling, only having significant magnitude
when \(x_1\) or \(x_2\) are small (\cref{fig:dist-pdf-fixed-eta}) and
being very steep.
@ -94,17 +94,43 @@ To remedy that, one has to use a more efficient sampling algorithm
(\vegas) or impose very restrictive cuts. The self-coded
implementation used here can be found in \cref{sec:pycode} and employs
stratified sampling (as discussed in \cref{sec:stratsamp-real}) and
the hit-or-miss method. Because the stratified sampling requires very
accurate upper bounds, they have been overestimated by
\result{xs/python/pdf/overesimate}, which lowers the efficiency
slightly but reduces bias. The MC integrator was used to
estimate the location of the maximum in each hypercube and then this
estimate was improved by gradient ascend\footnote{Which becomes
problematic, when performed close to a cut.}.
the hit-or-miss method. The matrix element (ME) and cuts are
implemented using \texttt{cython}~\cite{behnel2011:cy} to obtain
better performance as these are evaluated very often. The ME and the
cuts are then convolved with the PDF (as in \cref{eq:weighteddist})
and wrapped into a simple function with a generic interface and
plugged into the \vegas\ implementation which then computes the
integral, grid, individual contributions to the grid and rough
estimates of the maxima in each hypercube. In principle the code could
be generalized to other processes by simply redefining the matrix
elements, as no other part of the code is process specific. The cuts
work as simple \(\theta\)-functions, which has the advantage that the
maximum for hit or miss can be chosen with respect to those cuts. On
the other hand, this method introduces discontinuities into the
integrand, which is problematic for numeric maximizers. The estimates
of the maxima provided by the \vegas\ implementation are therefore used
as starting points for a gradient ascent maximizer. In this way, the
discontinuities introduced by the cuts are circumvented. Because the
stratified sampling requires very accurate upper bounds, they have
been overestimated by \result{xs/python/pdf/overesimate}\!, which
lowers the efficiency slightly but reduces bias. The sampling
algorithm chooses hypercubes randomly in accordance with their
contribution to the integral by generating a uniformly distributed
random number \(r\in [0,1]\) and summing the weights of the hypercubes
until the sum exceeds this number. The last hypercube in this sum is
then chosen and one sample is obtained. Taking more than one sample
can improve performance, but introduces bias, as hypercubes with low
weight may be oversampled. At various points, the
\texttt{numba}~\cite{lam2015:po} package has been used to just-in-time
compile code to increase performance. The python
\texttt{multiprocessing} module is used to parallelize the sampling
and exploit all CPU cores. Although the \vegas\ step is very time
intensive, the actual sampling performance is of the same order of
magnitude as that of \sherpa, though some parameters have to be tuned
manually.
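The hypercube selection described above amounts to sampling from the
discrete distribution of hypercube weights; vectorized with a
cumulative sum, it can be sketched as:
\begin{verbatim}
import numpy as np

def choose_cells(weights, n, seed=0):
    """Pick n hypercube indices proportional to their weights."""
    rng = np.random.default_rng(seed)
    cum = np.cumsum(weights) / np.sum(weights)
    # first index where the running weight sum exceeds r, as in the text
    return np.searchsorted(cum, rng.random(n))
\end{verbatim}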
A sample of \result{xs/python/pdf/sample_size} events has been
generated both in \sherpa\ and through own code. The resulting
histograms of some observables are depicted in
generated both in \sherpa\ (with the same cuts) and with the code
described above. The resulting histograms of some observables are depicted in
\cref{fig:pdf-histos}. The sampling efficiency achieved was
\result{xs/python/pdf/samp_eff} using a total of
\result{xs/python/pdf/num_increments} hypercubes. As can be seen, the
@ -119,42 +145,41 @@ has also smoothed out the jacobian peak seen in \cref{fig:histpt}.
Furthermore new observables have been introduced. The invariant mass
of the photon pair
\(m_{\gamma\gamma} = (p_{\gamma,1} + p_{\gamma,1})^2\) is the center
\(m_{\gamma\gamma} = \sqrt{(p_{\gamma,1} + p_{\gamma,2})^2}\) is the center
of mass energy of the partonic system that produces the photons (see
\cref{eq:ecm_partons}); its square is proportional to the product of
the momentum fractions of the partons. \Cref{fig:pdf-inv-m} shows that the vast
majority of the reactions take place at a rather low c.m. energy. Due
to the \(\pt\) cuts the first bin is slightly lower then the second.
majority of the reactions take place at a rather low c.m. energy,
owing to the high weights of the PDF at small \(x\) values. Due to the
\(\pt\) cuts the first bin is slightly lower than the second.
The cosines of the scattering angles in the lab frame and the
Collins-Soper (CS) frame are defined in
\cref{eq:sangle,eq:sangle-cs}. The scattering angle is just the angle
between one photon and the photons and the z axis in the c.m. frame if
this frame can be reached by a boost along the z axis\footnote{Or me
between one photon and the z-axis (beam axis) in the c.m. frame if
this frame can be reached by a boost along the z-axis\footnote{Or more
generally, in a z-boosted frame where the angles of the two photons
are the same.}. Here, the partons are assumed to have no transverse
momentum and therefore the system is symmetric around the beam axis
and therefore this boost is possible. When allowing transverse parton
momenta, as will be done in % TODO: REFERENCE
this symmetry goes away. Defining the z-axis as one beam axis in a
frame would be a quite arbitrary choice that disrespects the symmetry
of the two beams considered here (same energy, identical protons).
Also the random direction of the transverse momentum can add noise
that does not contain much information. The CS frame is defined as the
rest frame of the two outgoing photons in which the z-axis bisects the
momentum and the system is symmetric around the beam axis and
therefore this boost is possible. When allowing transverse parton
momenta, as will be done in \cref{chap:pheno}, this symmetry goes
away. Defining the z-axis as one beam axis in the c.m. frame would be
quite an arbitrary choice that disrespects the symmetry of the two
beams considered here (same energy, identical protons). Also the
random direction of the transverse momentum can add noise that does
not contain much information. The CS frame is defined as the rest
frame of the two outgoing photons in which the z-axis bisects the
angle between the first beam momentum and the inverse momentum of the
second beam. The azimuth angle is measure with respect to a vector
perpendicular to the plane of the beams (which is parallel to the
transverse momentum in the lab frame). In this frame, which was
originally chosen to simplify the extension of the Drell-Yan parton
model to transverse parton momenta~\cite{collins:1977an}, some
symmetry is restored and the study of effects of transverse parton
momenta is facilitated. Because of the above-mentioned symmetry, the
histograms in \cref{fig:pdf-o-angle,fig:pdf-o-angle-cs} are the
same. One would naively expect some likeness to \cref{fig:distcos} but
the cuts imposed alter the distribution quite considerably, cutting of
the \(\cos\theta^\ast\rightarrow 1\)
region. % TODO: come back to that in next chapter
second beam. In this frame, which was originally conceived to simplify
the extension of the Drell-Yan parton model to transverse parton
momenta~\cite{collins:1977an}, some symmetry is restored and the study
of effects of transverse parton momenta is facilitated. Because of the
above-mentioned symmetry, the histograms in
\cref{fig:pdf-o-angle,fig:pdf-o-angle-cs} are the same. One would
naively expect some likeness to \cref{fig:distcos} but the cuts
imposed alter the distribution quite considerably, cutting off the
\(\cos\theta^\ast\rightarrow 1\) region.
% TODO: come back to that in next chapter TODO: graphic (tikz)
\begin{align}
\cos\theta^\ast &= \tanh\frac{\eta_1 - \eta_2}{2} \label{eq:sangle}\\

View file

@ -12,18 +12,17 @@ or less hard processes (Multiple Interactions, MI) and affect the hard
process through color correlation. All of the processes not directly
connected to the hard process are called the underlying event and have
to be taken into account to generate events that can be compared with
experimental data, as they form a measurable background. Finally the
partons from the showers recombine into hadrons (Hadronization) due to
confinement. This last effect doesn't produce diphoton-relevant
background directly, but affects photon
experimental data. Finally the partons from the showers recombine into
hadrons (Hadronization) due to confinement. This last effect does not
produce diphoton-relevant background directly, but affects photon
isolation.~\cite[11]{buckley:2011ge} % TODO: describe isolation
These effects can be calculated or modeled on a per-event basis by
modern monte-carlo event generators like \sherpa. This has been done
modern Monte Carlo event generators like \sherpa\footnote{But these
calculations and models are always approximations.}. This is done
for the diphoton process in a gradual way described in
\cref{sec:setupan}. Histograms of observables have been generated and
are being discussed in \cref{sec:disco}.
\cref{sec:setupan}. Histograms of observables are generated and
discussed in \cref{sec:disco}.
%%% Local Variables:
%%% mode: latex

View file

@ -7,15 +7,15 @@ are incremental in the sense that each subsequent configuration
extends the previous one.
\begin{description}
\item[Basic] The hard process on parton level as used in \cref{sec:pdf_results}.
\item[PS] The shower generator of \sherpa, \emph{CSS} (dipole-shower),
\item[LO] The hard process on parton level as used in \cref{sec:pdf_results}.
\item[LO+PS] The shower generator of \sherpa, \emph{CSS} (dipole-shower),
is activated and simulates initial state radiation, as there are no
partons in the final state yet.
\item[PS+pT] The beam remnants and intrinsic parton
\item[LO+PS+pT] The beam remnants and intrinsic parton
\(\pt\) are simulated, giving rise to final state radiation.
\item[PS+pT+Hadronization] A cluster hadronization model
\item[LO+PS+pT+Hadronization] A cluster hadronization model
implemented in \emph{Ahadic} is activated.
\item[PS+pT+Hadronization+MI] Multiple interactions based on the
\item[LO+PS+pT+Hadronization+MI] Multiple interactions based on the
Sj\"ostrand-van-Zijl are simulated.
\end{description}
@ -31,14 +31,19 @@ as \SI{6500}{\giga\electronvolt} to resemble \lhc\ conditions.
The analysis is similar to the one used in \cref{sec:ppevents} with
the addition of the observable of total transverse momentum of the
photon pair, which now can be non-zero. Because the final state now
contains multiple photons, the two photons with the highest \(\pt\)
(leading photons) are selected. Furthermore a cone of
potentially contains additional photons from hadron decays, the
analysis selects only the prompt photons with the highest \(\pt\) (leading
photons). Furthermore, a cone of
\(R = \sqrt{\qty(\Delta\varphi)^2 + \qty(\Delta\eta)^2} = 0.4\) around
each photon must not contain more than \SI{4.5}{\percent}
(\(+ \SI{6}{\giga\electronvolt}\)), so that photons stemming from
showers are filtered out. For similar reasons the leading photons are
required to have \(\Delta R > 0.45\). The code of the analysis is
listed in \cref{sec:ppanalysisfull}.
each photon must not contain more than \SI{4.5}{\percent} of the
photon transverse momentum (\(+ \SI{6}{\giga\electronvolt}\)), in an
attempt to exclude photons stemming from hadron decays. The leading
photons are required to have \(\Delta R > 0.45\), to filter out
collinear photons, as they likely stem from hadron decays. In truth,
the analysis already excludes such photons, but these criteria have
been included for compatibility with experimental data, which must
rely on them. The code of the analysis is listed
in \cref{sec:ppanalysisfull}.
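Expressed as a formula, the isolation criterion above bounds the
additional transverse energy \(E_{\mathrm{T}}^{\mathrm{iso}}\) inside
the cone around each photon:
\begin{equation*}
  E_{\mathrm{T}}^{\mathrm{iso}}(R = 0.4) < 0.045\cdot\pt^{\gamma} + \SI{6}{\giga\electronvolt}.
\end{equation*}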
% TODO: refer back to this in discussion
% TODO: cite analysis template

View file

@ -1,10 +1,10 @@
\section{Calculation of the Cross Section to Leading Order}%
\label{sec:qqggcalc}
After labeling the incoming quarks and outgoing photons, as well as
the momenta, according to \cref{fig:qqggfeyn}, the Feynman rules yield
the matrix elements in \cref{eq:matel}, where \(Z\) is the electric
charge number of the quark and \(g\) is the QED coupling constant. The
respective spinors and polarisation vectors are \(\us,\vs\) and
\(\pe\). Parentheses are used whenever indices would clutter
the notation. The matrix element for \cref{fig:qqggfeyn2} is obtained

View file

@ -27,24 +27,35 @@ two identical photons in the final state.
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{.49\textwidth}
\centering \plot{xs/diff_xs_zoom}
\caption[Plot of the differential cross section of the \(\qqgg\)
process.]{\label{fig:diffxs_zoom} The differential cross section as a
function of the polar angle \(\theta\) in the crucial region.}
\end{subfigure}
\begin{subfigure}[t]{.49\textwidth}
\centering \plot{xs/diff_xs}
\caption[Plot of the differential cross section of the \(\qqgg\)
process.]{\label{fig:diffxs} The differential cross section as a
function of the polar angle \(\theta\) over the full integration
interval.}
\end{subfigure}
\begin{subfigure}[t]{.49\textwidth}
\centering
\plot{xs/total_xs}
\caption[Plot of the total cross section of the \(\qqgg\)
process.]{\label{fig:totxs} The cross section
\cref{eq:total-crossec} of the process, integrated over the
pseudo-rapidity interval \([-\eta, \eta]\).}
\end{subfigure}
\caption{\label{fig:xsfirst} Plots of the differential and total cross section
for \(\qqgg\).}
\end{figure}
The differential cross section \cref{eq:crossec} (see also
\cref{fig:diffxs}) is divergent for angles near zero or \(\pi\) but
remains finite in the physical region (see \cref{fig:diffxs_zoom}). At
small scattering angles the absolute square of the momentum carried by
the virtual quark in \cref{fig:qqggfeyn} goes to zero, which in the
massless limit means that the virtual quark comes \emph{on-shell} and
@ -84,15 +95,16 @@ As can be seen in \cref{fig:totxs}, the cross section, integrated over
an interval of \([-\eta, \eta]\), is dominated by the linear
contributions in \cref{eq:total-crossec} and would diverge if no cut
on \(\eta\) were imposed. Choosing
\result{xs/python/eta} and \result{xs/python/ecm}, the
process was MC integrated in \sherpa\ using the runcard in
\cref{sec:qqggruncard}. This runcard describes the exact same (leading
order) process as the calculated cross section.
\sherpa\ arrives at the cross section
\result{xs/python/xs_sherpa}. Plugging the same parameters into
\cref{eq:total-crossec} gives \result{xs/python/xs}, which is within
the uncertainty range of the \sherpa\ value. This verifies the result
for the total cross section.
%%% Local Variables:
%%% mode: latex

View file

@ -202,3 +202,65 @@
collision data collected by the ATLAS experiment},
journal = {Journal of High Energy Physics}
}
@article{2020Virtanen:Sc,
  author  = {{Virtanen}, Pauli and {Gommers}, Ralf and
             {Oliphant}, Travis E. and {Haberland}, Matt and
             {Reddy}, Tyler and {Cournapeau}, David and
             {Burovski}, Evgeni and {Peterson}, Pearu and
             {Weckesser}, Warren and {Bright}, Jonathan and {van
             der Walt}, St{\'e}fan J. and {Brett}, Matthew and
             {Wilson}, Joshua and {Jarrod Millman}, K. and
             {Mayorov}, Nikolay and {Nelson}, Andrew R.~J. and
             {Jones}, Eric and {Kern}, Robert and {Larson}, Eric
             and {Carey}, CJ and {Polat}, {\.I}lhan and {Feng},
             Yu and {Moore}, Eric W. and {VanderPlas}, Jake and
             {Laxalde}, Denis and {Perktold}, Josef and
             {Cimrman}, Robert and {Henriksen}, Ian and
             {Quintero}, E.~A. and {Harris}, Charles R. and
             {Archibald}, Anne M. and {Ribeiro}, Ant{\^o}nio
             H. and {Pedregosa}, Fabian and {van Mulbregt}, Paul
             and {SciPy 1.0 Contributors}},
  title   = {{SciPy 1.0: Fundamental Algorithms for Scientific
             Computing in Python}},
  journal = {Nature Methods},
  year    = {2020},
  volume  = {17},
  pages   = {261--272},
  adsurl  = {https://rdcu.be/b08Wh},
  doi     = {10.1038/s41592-019-0686-2},
}
@article{behnel2011:cy,
title = {Cython: The best of both worlds},
author = {Behnel, Stefan and Bradshaw, Robert and Citro, Craig
and Dalcin, Lisandro and Seljebotn, Dag Sverre and
Smith, Kurt},
journal = {Computing in Science \& Engineering},
volume = {13},
number = {2},
pages = {31--39},
year = {2011},
publisher = {IEEE}
}
@inproceedings{lam2015:po,
author = {Lam, Siu Kwan and Pitrou, Antoine and Seibert,
Stanley},
title = {Numba: A LLVM-Based Python JIT Compiler},
year = {2015},
isbn = {9781450340052},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2833157.2833162},
doi = {10.1145/2833157.2833162},
booktitle = {Proceedings of the Second Workshop on the LLVM
Compiler Infrastructure in HPC},
articleno = {7},
numpages = {6},
keywords = {LLVM, Python, compiler},
location = {Austin, Texas},
series = {LLVM '15}
}

View file

@ -156,13 +156,24 @@ esp = 200 # GeV
#+RESULTS:
Let's save that stuff.
#+begin_src jupyter-python :exports both :results raw drawer
tex_value(η, prefix=r"\abs{\eta}\leq ", prec=1, save=("results", "eta.tex"))
tex_value(
esp, prefix=r"\ecm = ", unit=r"\giga\electronvolt", save=("results", "ecm.tex")
)
#+end_src
#+RESULTS:
: \(\ecm = \SI{200}{\giga\electronvolt}\)
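For reference, a minimal sketch of what a ~tex_value~-style helper
might look like (hypothetical; the real helper lives in the tangled
utilities, and this signature is only inferred from the calls above):
#+begin_src jupyter-python :exports both :results raw drawer
import os

def tex_value_sketch(value, prefix="", unit=None, prec=3, save=None):
    # Hypothetical sketch, not the tangled implementation: render the
    # value as \(prefix \SI{value}{unit}\) and optionally write it to
    # <directory>/<name> so that LaTeX can \input it.
    num = f"{round(value, prec):g}"
    body = rf"\SI{{{num}}}{{{unit}}}" if unit else num
    out = rf"\({prefix}{body}\)"
    if save is not None:
        directory, name = save
        os.makedirs(directory, exist_ok=True)
        with open(os.path.join(directory, name), "w") as f:
            f.write(out)
    return out
#+end_src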
Set up the integration and plot intervals.
#+begin_src jupyter-python :exports both :results raw drawer
interval_η = [-η, η]
interval = η_to_θ([-η, η])
interval_cosθ = np.cos(interval)
interval_pt = np.sort(η_to_pt([0, η], esp/2))
plot_interval = [0.1, np.pi-.1]
#+end_src
#+RESULTS:
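The coordinate conversions used above are assumed to implement the
usual massless kinematics; a sketch with hypothetical names (not the
tangled implementation):
#+begin_src jupyter-python :exports both :results raw drawer
import numpy as np

def η_to_θ_sketch(η):
    # invert η = -ln(tan(θ/2))  =>  θ = 2·arctan(exp(-η))
    return 2 * np.arctan(np.exp(-np.asarray(η, dtype=float)))

def η_to_pt_sketch(η, p):
    # massless parton with momentum p: p_T = p·sin(θ) = p/cosh(η)
    return p / np.cosh(np.asarray(η, dtype=float))
#+end_src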
@ -173,6 +184,22 @@ but that doesn't reduce variance and would complicate things now.
#+end_note
** Analytical Integration
Let's plot a more detailed view of the xs.
#+begin_src jupyter-python :exports both :results raw drawer
plot_points = np.linspace(np.pi/2 - 0.5, np.pi/2 + 0.5, 1000)
plot_points = plot_points[plot_points > 0]
fig, ax = set_up_plot()
ax.plot(plot_points, gev_to_pb(diff_xs(plot_points, charge=charge, esp=esp)))
ax.set_xlabel(r"$\theta$")
ax.set_ylabel(r"$d\sigma/d\Omega$ [pb]")
ax.set_xlim([plot_points.min(), plot_points.max()])
save_fig(fig, "diff_xs_zoom", "xs", size=[2.5, 2.5])
#+end_src
#+RESULTS:
[[file:./.ob-jupyter/3986a139c4a6c3a27b1ef12a26b2e8f3ce473547.png]]
And now calculate the cross section in picobarn.
#+BEGIN_SRC jupyter-python :exports both :results raw file :file xs.tex
xs_gev = total_xs_eta(η, charge, esp)
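# Side note (an assumption, for clarity): gev_to_pb presumably applies
# the standard conversion (hbar·c)^2 ≈ 0.3894 GeV^2·mbarn, i.e.
#   1 GeV^-2 ≈ 3.894e8 pb,  so  xs_pb ≈ xs_gev * 0.3894e9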
@ -203,12 +230,15 @@ but that doesn't reduce variance and would complicate things now.
Compared to Sherpa, it's pretty close.
#+NAME: 81b5ed93-0312-45dc-beec-e2ba92e22626
#+BEGIN_SRC jupyter-python :exports both :results raw drawer
sherpa = np.loadtxt("../../runcards/qqgg/sherpa_xs", delimiter=",")
tex_value(
,*sherpa, unit=r"\pico\barn", prefix=r"\sigma = ", prec=6, save=("results", "xs_sherpa.tex")
)
xs_pb - sherpa[0]
#+END_SRC
#+RESULTS: 81b5ed93-0312-45dc-beec-e2ba92e22626
: -5.112594623490896e-07
I had to set the runcard option ~EW_SCHEME: alpha0~ to use the pure
QED coupling constant.
@ -216,21 +246,19 @@ but that doesn't reduce variance and would complicate things now.
** Numerical Integration
Plot our nice distribution:
#+begin_src jupyter-python :exports both :results raw drawer
plot_points = np.linspace(*np.arccos(interval_cosθ), 1000)
plot_points = plot_points[plot_points > 0]
fig, ax = set_up_plot()
ax.plot(plot_points, gev_to_pb(diff_xs(plot_points, charge=charge, esp=esp)))
ax.set_xlabel(r'$\theta$')
ax.set_ylabel(r'$d\sigma/d\Omega$ [pb]')
ax.set_xlim([plot_points.min(), plot_points.max()])
ax.axvline(interval[0], color='gray', linestyle='--')
ax.axvline(interval[1], color='gray', linestyle='--', label=rf'$|\eta|={η}$')
ax.legend()
save_fig(fig, 'diff_xs', 'xs', size=[2.5, 2.5])
#+end_src
#+RESULTS:
[[file:./.ob-jupyter/ea9069041c3e2ccd18c7642001c20d374696498d.png]]
Define the integrand.
#+begin_src jupyter-python :exports both :results raw drawer
@ -248,7 +276,7 @@ Plot the integrand. # TODO: remove duplication
fig, ax = set_up_plot()
ax.plot(plot_points, xs_pb_int(plot_points))
ax.set_xlabel(r'$\theta$')
ax.set_ylabel(r'$2\pi\cdot d\sigma/d\theta$ [pb]')
ax.set_xlim([plot_points.min(), plot_points.max()])
ax.axvline(interval[0], color='gray', linestyle='--')
ax.axvline(interval[1], color='gray', linestyle='--', label=rf'$|\eta|={η}$')
@ -256,7 +284,7 @@ Plot the integrand. # TODO: remove duplication
#+end_src
#+RESULTS:
[[file:./.ob-jupyter/0faa37f24e5e531a55c6679794b5ad84f98ed47b.png]]
*** Integral over θ
Integrate σ with the MC method.
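As a point of reference, a minimal sketch of such a plain MC estimator
(hypothetical helper, assuming uniform sampling and the usual
standard-error estimate; the actual implementation is tangled
elsewhere):
#+begin_src jupyter-python :exports both :results raw drawer
import numpy as np

def mc_integrate_sketch(f, a, b, n=100_000):
    # plain MC: integral ≈ (b-a)·mean(f), error ≈ (b-a)·std(f)/sqrt(n)
    x = np.random.uniform(a, b, n)
    y = f(x)
    return (b - a) * y.mean(), (b - a) * y.std(ddof=1) / np.sqrt(n)
#+end_src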
#+begin_src jupyter-python :exports both :results raw drawer
@ -326,9 +354,9 @@ Now we use =VEGAS= on the θ parametrisation and see what happens.
xs_pb_int,
interval,
num_increments=num_increments,
alpha=2,
increment_epsilon=0.02,
vegas_point_density=20,
epsilon=.001,
acumulate=False,
)
@ -336,11 +364,11 @@ Now we use =VEGAS= on the θ parametrisation and see what happens.
#+end_src
#+RESULTS:
: VegasIntegrationResult(result=0.053415671595191345, sigma=0.0006275962280404523, N=280, increment_borders=array([0.16380276, 0.20536326, 0.25384714, 0.31480502, 0.39193629,
: 0.48757604, 0.60550147, 0.75723929, 0.96215207, 1.23107803,
: 1.56182395, 1.89731315, 2.17107801, 2.37597223, 2.52898466,
: 2.64874341, 2.74453741, 2.82025926, 2.88209227, 2.93279579,
: 2.9777899 ]), vegas_iterations=7)
This is pretty good, although the variance reduction may be achieved
partially by accumulating the results from all runs. Here this gives
@ -365,8 +393,8 @@ This depends, of course, on the iteration count.
xs_pb_int,
interval,
num_increments=num_increments,
alpha=2,
increment_epsilon=0.02,
vegas_point_density=20,
epsilon=.001,
acumulate=True,
@ -374,11 +402,11 @@ This depends, of course, on the iteration count.
#+end_src
#+RESULTS:
: VegasIntegrationResult(result=0.053638937795766034, sigma=0.00041459931173526546, N=280, increment_borders=array([0.16380276, 0.20595251, 0.25473743, 0.31600983, 0.39036863,
: 0.48642444, 0.61192596, 0.76541368, 0.96544251, 1.24106758,
: 1.57551962, 1.90430764, 2.16425611, 2.36635898, 2.52157247,
: 2.64333433, 2.7431946 , 2.82557121, 2.88798018, 2.93770475,
: 2.9777899 ]), vegas_iterations=7)
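For intuition, a stripped-down sketch of what the increment borders do
(hypothetical helper; the real implementation additionally refines the
borders towards equal per-increment contributions, damped by ~alpha~,
and iterates until convergence):
#+begin_src jupyter-python :exports both :results raw drawer
import numpy as np

def stratified_estimate_sketch(f, borders, samples_per_increment=100):
    # stratified MC over the given increments: each increment [a, b]
    # contributes (b-a)·mean(f); equal contributions are the fixed
    # point the VEGAS-style refinement drives the borders towards.
    contributions = np.array([
        (b - a) * f(np.random.uniform(a, b, samples_per_increment)).mean()
        for a, b in zip(borders[:-1], borders[1:])
    ])
    return contributions.sum(), contributions
#+end_src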
Let's define some little helpers.
#+begin_src jupyter-python :exports both :tangle tangled/plot_utils.py
@ -459,7 +487,7 @@ And now we plot the integrand with the increments.
ax.set_xlabel(r"$\theta$")
ax.set_ylabel(r"$2\pi\cdot d\sigma/d\theta$ [pb]")
ax.set_ylim([0, 0.09])
plot_points = np.linspace(*interval, 1000)
ax.plot(plot_points, xs_pb_int(plot_points), label="Distribution")
plot_increments(
@ -483,7 +511,7 @@ And now we plot the integrand with the increments.
#+end_src
#+RESULTS:
[[file:./.ob-jupyter/1c53bfcbf349269eff9f54eb58b9639ed9e6ce21.png]]
*** Testing the Statistics
Let's battle test the statistics.
#+begin_src jupyter-python :exports both :results raw drawer
@ -981,15 +1009,15 @@ That looks somewhat fishy, but it isn't.
fig, ax = set_up_plot()
points = np.linspace(interval_pt[0], interval_pt[1] - .01, 1000)
ax.plot(points, gev_to_pb(diff_xs_p_t(points, charge, esp)))
ax.set_xlabel(r'$p_\mathrm{T}$')
ax.set_xlim(interval_pt[0], interval_pt[1] + 1)
ax.set_ylim([0, gev_to_pb(diff_xs_p_t(interval_pt[1] -.01, charge, esp))])
ax.set_ylabel(r'$\frac{\mathrm{d}\sigma}{\mathrm{d}p_\mathrm{T}}$ [pb]')
save_fig(fig, 'diff_xs_p_t', 'xs_sampling', size=[4, 2])
#+end_src
#+RESULTS:
[[file:./.ob-jupyter/5c39a14515ced9b3f1d5d0cdd0c4fe75921ee3a7.png]]
This is strongly peaked at p_T = 100 GeV, where the Jacobian diverges.
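Spelled out, assuming the massless kinematics from above: with
\(p_\mathrm{T} = p\sin\theta\) one has
\(\cos\theta = \pm\sqrt{1 - (p_\mathrm{T}/p)^2}\), so the change of
variables contributes a factor
\(|\mathrm{d}\cos\theta/\mathrm{d}p_\mathrm{T}| = p_\mathrm{T}/(p\sqrt{p^2 - p_\mathrm{T}^2})\),
which diverges as \(p_\mathrm{T} \to p = 100\,\mathrm{GeV}\).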
*** Sampling the η cross section

View file

[matplotlib PGF output: this commit regenerates the plot's path data
(curve coordinates and dashed grid lines); the coordinate dump, several
hundred \pgfpath... lines, is omitted here as it carries no readable content.]

View file

@ -1857,7 +1857,7 @@
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{textcolor}%
\pgfsetfillcolor{textcolor}%
\pgftext[x=2.406374in,y=0.318333in,,top]{\color{textcolor}\rmfamily\fontsize{10.000000}{12.000000}\selectfont \(\displaystyle p_\mathrm{T}\)}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfpathrectangle{\pgfqpoint{1.058958in}{0.594444in}}{\pgfqpoint{2.694832in}{1.206944in}}%
@ -3863,7 +3863,7 @@
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{textcolor}%
\pgfsetfillcolor{textcolor}%
\pgftext[x=0.520347in,y=1.197917in,,bottom,rotate=90.000000]{\color{textcolor}\rmfamily\fontsize{10.000000}{12.000000}\selectfont \(\displaystyle \frac{\mathrm{d}\sigma}{\mathrm{d}p_\mathrm{T}}\) [pb]}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfpathrectangle{\pgfqpoint{1.058958in}{0.594444in}}{\pgfqpoint{2.694832in}{1.206944in}}%

View file

@ -0,0 +1 @@
\(\ecm = \SI{200}{\giga\electronvolt}\)

View file

@ -0,0 +1 @@
\(\abs{\eta}\leq 2.5\)

View file

@ -1 +1 @@
\(\sigma = \SI{0.0534\pm 0.0006}{\pico\barn}\)

View file

@ -1 +1 @@
\(N = 280\)

View file

@ -1 +1 @@
\(\times7\)

View file

@ -0,0 +1 @@
\(\sigma = \SI{0.0537938\pm 0.0000025}{\pico\barn}\)