Technical aspects of McMule
In this section, we review the very technical details of the implementation. This is meant for those readers who wish to truly understand the nuts and bolts holding the code together. We begin by discussing the phasespace generation and potential pitfalls in Section Phasespace generation. Next, in Section Implementation of FKS schemes, we discuss how the FKS scheme [8, 23, 25, 28, 29] (cf. Appendix The FKS^2 scheme for a review) is implemented in Fortran code. This is followed by a brief review of the random number generator used in McMule in Section Random number generation. Finally, we give an account of how the statefiles work and how they are used to store distributions in Section Differential distributions and intermediary state files.
Particle string framework
Note
This section needs to be completed, link to issue
Phasespace generation
We use the vegas
algorithm for numerical integration [15].
As vegas
only works on the hypercube, we need a routine that maps \([0,1]^{3n-4}\) to the momenta of an \(n\)-particle final state, including the corresponding Jacobian.
The simplest way to do this uses iterative two-particle phasespaces, boosting all generated momenta back into the frame under consideration.
An example of how this is done is shown in Listing 18.
! use a random number to decide how much energy should
! go into the first particle
minv3 = ra(1)*energy
! use two random numbers to generate the momenta of
! particles 1 and the remainder in the CMS frame
call pair_dec(ra(2:3),energy,q2,m2,qq3,minv3)
! adjust the Jacobian
weight = minv3*energy/pi
weight = weight*0.125*sq_lambda(energy**2,m2,minv3)/energy**2/pi
! use a random number to decide how much energy should
! go into the second particle
minv4 = ra(4)*energy
! use two random numbers to generate the momenta of
! particles 2 and the remainder in their rest frame
call pair_dec(ra(5:6),minv3,q3,m3,qq4,minv4)
! adjust the Jacobian
weight = weight*minv4*energy/pi
weight = weight*0.125*sq_lambda(minv3**2,m3,minv4)/minv3**2/pi
! repeat this process until all particles are generated
! boost all generated particles back into the CMS frame
q4 = boost_back(qq4, q4)
q5 = boost_back(qq4, q5)
q3 = boost_back(qq3, q3)
q4 = boost_back(qq3, q4)
q5 = boost_back(qq3, q5)
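The two-body building block used by pair_dec above can be sketched outside of Fortran. The following Python fragment is illustrative only (the function names kallen and two_body_decay do not exist in McMule): given two random numbers, it decays a system of invariant mass minv into two daughters of masses m1 and m2, with the momentum fixed by the Källén function \(\lambda\).

```python
import math

def kallen(a, b, c):
    # Källén triangle function lambda(a, b, c)
    return a**2 + b**2 + c**2 - 2*a*b - 2*a*c - 2*b*c

def two_body_decay(r1, r2, minv, m1, m2):
    """Decay a system of invariant mass minv into masses m1 and m2.

    Returns two four-momenta (E, px, py, pz) in the rest frame of the
    decaying system, with the direction drawn flat in cos(theta) and phi.
    """
    cos_th = 2*r1 - 1
    phi = 2*math.pi*r2
    sin_th = math.sqrt(1 - cos_th**2)
    # momentum of the back-to-back daughters
    p = math.sqrt(kallen(minv**2, m1**2, m2**2))/(2*minv)
    n = (p*sin_th*math.cos(phi), p*sin_th*math.sin(phi), p*cos_th)
    e1 = math.sqrt(p**2 + m1**2)
    e2 = math.sqrt(p**2 + m2**2)
    return (e1, *n), (e2, -n[0], -n[1], -n[2])
```

Iterating this, as in Listing 18, and boosting each pair back into the frame of its parent system reproduces the full phasespace; the factors of sq_lambda in the Fortran weights are exactly the \(\sqrt{\lambda}\) appearing in the momentum p above.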
As soon as we start using FKS, we cannot use this simplistic approach any longer. The \(c\)-distributions of FKS require the photon energies \(\xi_i\) to be variables of the integration. We can fix this by first generating the photon explicitly as
where \(\vec e_\perp\) is a \((d-2)\)-dimensional unit vector and the ranges of \(y_1\) (the cosine of the angle) and \(\xi_1\) (the scaled energy) are \(-1\le y_1 \le 1\) and \(0\le\xi_1 \le\xi_{\text{max}}\), respectively. The upper bound \(\xi_{\text{max}}\) depends on the masses of the outgoing particles. Following [28] we find
Finally, the remaining particles are generated iteratively again. This can always be done and is guaranteed to work.
For processes with one or more PCSs this approach is suboptimal.
The numerical integration can be improved by orders of magnitude by aligning the pseudo-singular contribution with one of the variables of the integration, as this allows vegas
to optimise the integration procedure accordingly.
As an example, consider once again \(\mu\to\nu\bar\nu e\gamma\).
The PCS comes from
where \(y\) is the cosine of the angle between the photon (\(k\)) and the electron (\(q\)). For large velocities \(\beta\) (or equivalently small masses), this becomes almost singular as \(y\to1\). If \(y\) is now a variable of the integration, this can be mitigated. An example implementation is shown in Listing 19.
xi5 = ra(1)
y2 = 2*ra(2) - 1.
! generate electron q2 and photon q5 s.t. that the
! photon goes into z diractions
eme = energy*ra(3)
pme = sqrt(eme**2 - m2**2)
q2 = (/ 0., pme*sqrt(1.  y2**2), pme*y2, eme /)
q5 = (/ 0., 0. , 1. , 1. /)
q5 = 0.5*energy*xi5*q5
! generate euler angles and rotate all momenta
euler_mat = get_euler_mat(ra(4:6))
q2 = matmul(euler_mat,q2)
q5 = matmul(euler_mat,q5)
qq34 = q1 - q2 - q5
minv34 = sqrt(sq(qq34))
! The event weight; note that a factor xi5**2 has been omitted
weight = energy**3*pme/(4.*(2.*pi)**4)
! generate remaining neutrino momenta
call pair_dec(ra(7:8),minv34,q3,m3,q4,m4,enough_energy)
weight = weight*0.125*sq_lambda(minv34**2,m3,m4)/minv34**2/pi
q3 = boost_back(qq34, q3)
q4 = boost_back(qq34, q4)
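The benefit of having \(y\) as an integration variable can be demonstrated with a toy integral \(\int_{-1}^{1}\mathrm{d}y/(1-\beta y)\), which has the same pseudo-singular structure. The Python sketch below is not McMule code; it compares naive uniform sampling in \(y\) with a substitution that flattens the integrand, mimicking what vegas achieves once \(y\) is one of its variables.

```python
import math
import random

def naive_estimate(beta, n, rng):
    # sample y uniformly in [-1, 1]; the factor 2 is the range
    vals = [2.0/(1 - beta*(2*rng.random() - 1)) for _ in range(n)]
    mean = sum(vals)/n
    var = sum((v - mean)**2 for v in vals)/(n - 1)
    return mean, math.sqrt(var/n)

def flattened_estimate(beta, n, rng):
    # substitute 1 - beta*y = (1+beta)**(1-t) * (1-beta)**t with t
    # uniform in [0, 1]; integrand times Jacobian is then constant
    L = math.log((1 + beta)/(1 - beta))
    vals = []
    for _ in range(n):
        t = rng.random()
        one_minus_by = (1 + beta)**(1 - t)*(1 - beta)**t
        jac = one_minus_by*L/beta          # |dy/dt|
        vals.append(jac/one_minus_by)      # integrand * Jacobian
    mean = sum(vals)/n
    var = sum((v - mean)**2 for v in vals)/(n - 1)
    return mean, math.sqrt(var/n)

exact = lambda beta: math.log((1 + beta)/(1 - beta))/beta
```

For \(\beta\) close to 1 the naive error estimate is orders of magnitude larger than the flattened one, even though both estimators are unbiased.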
The approach outlined above is very easy to implement in the case of the muon decay as the neutrinos can absorb any time-like four-momentum. This is because the \(\delta\) function of the phasespace was solved through the neutrinos’ pair_dec.
However, for scattering processes where all final-state leptons could be measured, this fails.
Writing a routine for \(\mu\)-\(e\) scattering that optimises on the incoming electron is rather trivial because its direction stays fixed s.t. the photon just needs to be generated according to (4). The outgoing electron \(p_3\) is more complicated. Writing the \(p_4\) phasespace four- instead of three-dimensional,
we can solve the four-dimensional \(\delta\) function for \(p_4\) and proceed with the generation of \(p_3\) and \(p_5\) almost as for the muon decay above. Doing this, we obtain for the final \(\delta\) function
When solving this for \(E_3\), we need to take care to avoid extraneous solutions of this radical equation [11]. We have now obtained our phasespace parametrisation, albeit with one caveat: for anti-collinear photons, i.e. \(-1<y<0\), with energies
there are still two solutions.
One of these corresponds to very low-energy electrons that are produced almost at rest.
This is rather fortunate, as most experiments will have an electron detection threshold higher than this.
Otherwise, phasespaces optimised this way also define a which_piece
for this corner region.
There is one last subtlety when it comes to this type of phasespace optimisation. Optimising the phasespace for emission from one leg often has adverse effects on terms with dominant emission from another leg. In other words, the numerical integration works best if there is only one PCS on which the phasespace is tuned. As most processes have more than one PCS, we need to resort to something that was already discussed in the original FKS paper [29]. Scattering processes that involve multiple massless particles have overlapping singular regions. The FKS scheme mandates that the phasespace be partitioned in such a way as to isolate at most one singularity per region, with each region having its own phasespace parametrisation. Similarly, we have to split the phasespace to contain at most one PCS as well as the soft singularity. In McMule, \(\mu\)-\(e\) scattering for instance is split as follows [1]
with \(s_{ij} = 2p_i\cdot p_j\) as usual. The integrand of the first \(\theta\) function has a final-state PCS and hence we use the parametrisation obtained by solving (5). The second \(\theta\) function, on the other hand, has an initial-state PCS which can be treated by directly parametrising the photon in the centre-of-mass frame as per (4). This automatically makes \(s_{15}\propto(1-\beta_{\text{in}}y_1)\) a variable of the integration.
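Schematically, such a partition amounts to assigning each event to exactly one region, with weights that sum to one, so that each \(\theta\) function can be integrated with the parametrisation adapted to its PCS. The fragment below is a minimal Python illustration of this bookkeeping; the actual split used in McMule is the one quoted above, and the region criterion here is purely illustrative.

```python
def partition_weights(s15, s35):
    """Split the phasespace into two regions according to which
    invariant is smaller, i.e. which PCS the event is closest to.
    Returns (w1, w2) with w1 + w2 = 1 for every event."""
    w1 = 1.0 if s35 < s15 else 0.0  # region with the final-state PCS
    return w1, 1.0 - w1
```

Summing the two regions, each integrated with its own parametrisation, recovers the full phasespace integral.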
For the double-real corrections of \(\mu\)-\(e\) scattering, we proceed along the same lines, except that now the argument of the \(\delta\) function is more complicated.
Implementation of FKS schemes
Now that we have a phasespace routine that has \(\xi_i\) as variables of the integration, we can start implementing the relevant \(c\)-distributions (11)
We refer to the first term as the event and the second as the counter-event.
Note that, due to the presence of \(\delta(\xi_1)\) in the counter-event (which is implemented through the eikonal factor \(\mathcal{E}\)), the momenta generated by the phasespace \(\mathrm{d}\Upsilon_1\mathrm{d}\Phi_{n,1}\) are different. Thus, it is possible that the momenta of the event pass the cuts or on-shell conditions, while those of the counter-event fail, or vice versa. This subtlety is extremely important for a proper implementation of the FKS scheme and many problems fundamentally trace back to it.
Finally, we should note that, in order to increase numerical stability, we introduce cuts on \(\xi\) and sometimes also on a parameter that encodes the PCS, such as \(y={\tt y2}\) in (4) and Listing 19. Events that have values of \(\xi\) smaller than this soft cut are discarded immediately and no subtraction is considered. The dependence on this slicing parameter is not expected to drop out completely and hence the soft cut has to be chosen small enough not to influence the result.
An example implementation can be found in Listing 20.
FUNCTION SIGMA_1(x, wgt, ndim)
! The first random number x(1) is xi.
arr = x
! Generate momenta for the event using the function pointer ps
call gen_mom_fks(ps, x, masses(1:nparticle), vecs, weight)
! Whether unphysical or not, take the value of xi
xifix = xiout
! Check if the event is physical ...
if(weight > zero ) then
! and whether it passes the cuts
var = quant(vecs(:,1), vecs(:,2), vecs(:,3), vecs(:,4), ...)
cuts = any(pass_cut)
if(cuts) then
! Calculate the xi**2 * M_{n+1}^0 using the pointer matel
mat = matel(vecs(:,1), vecs(:,2), vecs(:,3), vecs(:,4), ...)
mat = xifix*weight*mat
sigma_1 = mat
end if
end if
! Check whether soft subtraction is required
if(xifix < xicut1) then
! Implement the delta function and regenerate events
arr(1) = 0._prec
call gen_mom_fks(ps, arr, masses(1:nparticle), vecs, weight)
! Check whether to include the counter event
if(weight > zero) then
var = quant(vecs(:,1), vecs(:,2), vecs(:,3), vecs(:,4), ...)
cuts = any(pass_cut)
if(cuts) then
mat = matel_s(vecs(:,1), vecs(:,2), vecs(:,3), vecs(:,4), ...)
mat = weight*mat/xifix
sigma_1 = sigma_1 - mat
endif
endif
endif
END FUNCTION SIGMA_1
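The structure of SIGMA_1 can be mimicked in a toy model. In the Python sketch below (plain Python, illustrative only), the role of \(\xi^2\mathcal{M}_{n+1}\) is played by a function f(xi) that is regular at \(\xi=0\); the event f(xi)/xi is subtracted by the counter-event f(0)/xi below xicut, and points below the soft cut are discarded entirely.

```python
import math
import random

def f(xi):
    # a stand-in for xi**2 * M_{n+1}, regular as xi -> 0
    return 1.0 + xi

def sigma_toy(n, xicut, softcut, rng):
    """Monte Carlo estimate of int_0^1 dxi [f(xi) - f(0)*theta(xicut - xi)]/xi.

    Mirrors SIGMA_1: the 'event' f(xi)/xi and, for xi < xicut, the
    'counter-event' f(0)/xi; points below the soft cut are discarded.
    """
    acc = 0.0
    for _ in range(n):
        xi = rng.random()
        if xi < softcut:
            continue                  # soft cut: no event, no subtraction
        wgt = f(xi)/xi                # the event
        if xi < xicut:
            wgt -= f(0.0)/xi          # the counter-event
        acc += wgt
    return acc/n
```

With xicut = 1 the estimate converges to \(\int_0^1\mathrm{d}\xi\,(f(\xi)-f(0))/\xi = 1\) for f(xi) = 1 + xi; for smaller xicut the result depends on xicut, just as the real correction alone depends on \(\xi_c\) before being combined with the virtual part.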
Calling procedures and function pointers
McMule uses function pointers to keep track of which functions to call for the integrand, phasespace routine, and matrix element(s).
These pointers are assigned during init_piece()
and then called throughout integrands
and phase_space
.
The pointers for the phasespace generator and integrand are just assigned using the =>
operator, i.e.
ps => psx2 ; fxn => sigma_0
The relevant abstract interface for the integrand fxn
is
abstract interface
function integrand(x,wgt,ndim)
import prec
integer :: ndim
real(kind=prec) :: x(ndim),wgt
real(kind=prec) :: integrand
end function integrand
end interface
Doing the same for the matrix elements is not possible as they do not have a consistent interface.
Instead, we use a C function set_func
that is implemented in a separate file to assign the functions, ignoring the interface
call set_func('00000000', pm2enngav)
call set_func('00000001', pm2ennav)
call set_func('11111111', m2enn_part)
The first argument corresponds to the type of function that is being set.
Bitmask    Name    Description
                   hard matrix element
                   reduced matrix element
                   doubly reduced matrix element
                   particle string function
                   single soft limit
                   hard-soft limit
                   soft-hard limit
                   double soft limit
If the soft limits are not assigned, they are auto-generated using the partfunc.
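The effect of set_func can be illustrated with a small Python registry keyed by the bitmask string. This is only a sketch of the idea, not the actual C implementation; the bitmask keys and matrix elements used here are hypothetical.

```python
registry = {}

def set_func(mask, func):
    # store any callable under its bitmask key, ignoring its interface
    registry[mask] = func

def call(mask, *args):
    return registry[mask](*args)

# hypothetical matrix elements with different interfaces
set_func('00000000', lambda q1, q2, q5: 1.0/(q1*q2*q5))  # hard
set_func('00000001', lambda q1, q2: q1 + q2)             # reduced

# auto-generate a missing limit from the reduced element,
# in the spirit of the partfunc mechanism
if '00000100' not in registry:
    set_func('00000100', lambda q1, q2: 2*call('00000001', q1, q2))
```

Because the registry stores bare callables, functions with different argument lists can coexist, which is precisely why the Fortran side cannot use a single abstract interface here.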
Optional parameters for integrands
The integration is configured during the init_piece()
routine.
In addition to identifying what is to be integrated (cf. Section Calling procedures and function pointers), one also configures other parameters such as the dimensionality or the masses involved.
Variable    Type    Description                                              Required
                    the number of total particles (initial & final)          yes
                    the dimensionality of the phase space; usually this
                    is \(3n_f-4\), for calculations with extra
                    integrations, these are included                         yes
                    the masses of all particles                              yes
                    the value of \(\xi_c\) for the first subtraction         for real corrections
                    the value of \(\xi_c\) for the second subtraction        for double-real corrections
                    the value of \(\xi_c\) for the first eikonal             for virtual or real-virtual corrections
                    the value of \(\xi_c\) for the second eikonal            for double-virtual corrections
                    the number of polarised particles                        no, defaults to 0
                    the symmetry factor for indistinguishable final states   no, defaults to 1
                    the soft cut parameter                                   no, but recommended; defaults to 0
                    the collinear cut parameter                              no, but recommended; defaults to 0
                    the NTS switching point                                  only for NTS matrix elements
\(\xi_c\) parameters
For the \(\xi_c\) parameters, the user enters a value between zero (exclusive) and one (inclusive).
However, the FKS procedure requires the bounds of (12) and the parameters hence need to be rescaled accordingly.
In principle the user may enter two different values (xinormcut = xinormcut1
and xinormcut2
) though this is rarely called for.
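Schematically, the rescaling maps the user-facing value onto the physically allowed range. The sketch below uses an assumed function name for illustration; McMule performs this rescaling internally.

```python
def rescale_xicut(xinormcut, ximax):
    """Map a user value in (0, 1] onto the FKS range (0, ximax]."""
    if not 0.0 < xinormcut <= 1.0:
        raise ValueError("xinormcut must lie in (0, 1]")
    return xinormcut*ximax
```

Entering xinormcut = 1 thus corresponds to the largest allowed \(\xi_c\) for the process at hand.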
Soft and collinear cut parameter
To improve numerical stability, we set events that have a value of \(\xi\) (\(y\)) lower than softcut
(collcut
) to zero.
Warning
This introduces a systematic error that needs to be studied. For small values, the improvement in stability is generally worth a small error that is anyway drowned out by the statistical error.
This means that we are changing the integration (6)
and similarly with collcut
.
We have found that values of softcut = 1e-10
and collcut = 1.e-11
give reliable results.
Random number generation
A Monte Carlo integrator relies on a (pseudo-)random number generator (RNG or PRNG) to work. The pseudo-random numbers need to be of high enough quality, i.e. have no discernible pattern and a long period, for each point of the integration to be considered independent, but the RNG needs to be simple enough to be called many billions of times without becoming a significant source of runtime. RNGs used in Monte Carlo applications are therefore generally poor in quality and often predictable, s.t. they could not be used for cryptographic applications.
A commonly used trade-off between unpredictability and simplicity, both in speed and implementation, is the Park-Miller RNG, also known as minstd
[19].
As a linear congruential generator, its \((k+1)\)-th output \(z_{k+1}\) can be found as \(z_{k+1} = a\, z_k\ \text{mod}\ m\), where \(m\) is a large, preferably prime, number and \(2<a<m-1\) an integer. The initial value \(z_1\) is called the random seed and is an integer chosen between 1 and \(m-1\). It can easily be seen that any such RNG has a fixed period [2] \(p<m\), s.t. \(z_{k+p} = z_k\), because any \(z_{k+1}\) only depends on \(z_k\) and there are finitely many possible \(z_k\). We call the RNG attached to \((m,a)\) of full period if \(p=m-1\), i.e. all integers between 1 and \(m-1\) appear in the sequence \(z_k\).
Assuming \(z_1=1\), the existence of a \(p\) s.t. \(z_{p+1}=1\) is guaranteed by Fermat’s little theorem [3]. This means that the RNG is of full period iff \(a\) is a primitive root modulo \(m\), i.e. \(a^k\not\equiv 1\ \text{mod}\ m\) for all \(0<k<m-1\).
Park and Miller suggest to use the Mersenne prime \(m=2^{31}-1\), noting that there are 534,600,000 primitive roots of which 7 is the smallest. Because \(7^b\ \text{mod}\ m\) is also a primitive root as long as \(b\) is coprime to \((m-1)\), [19] settled on \(b=5\), i.e. \(a=16807\), as a good choice for a multiplier that, per construction, has full period and passes certain tests of randomness.
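In Python the generator is a one-liner; the canonical correctness check from [19] is that, starting from \(z_1=1\), the 10,000th iterate equals 1043618065.

```python
def minstd(z):
    """One step of the Park-Miller generator, z -> a*z mod m."""
    m = 2**31 - 1   # the Mersenne prime 2147483647
    a = 16807       # 7**5, a primitive root modulo m
    return (a*z) % m

def minstd_stream(seed, n):
    """Apply the generator n times starting from seed."""
    z = seed
    for _ in range(n):
        z = minstd(z)
    return z
```

A uniform deviate in (0, 1) is then obtained as z/m. Note that production implementations in languages with fixed-width integers avoid the 64-bit intermediate product, e.g. via Schrage's algorithm; the version above simply relies on Python's arbitrary-precision integers.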
The points generated by any such RNG fall into at most \(\sqrt[n]{n!\cdot m}\) hyperplanes if scattered in an \(n\)-dimensional space [16]. However, for bad choices of the multiplier \(a\), the number of planes can be a lot smaller [4].
Presently, the period length of \(p=m-1=2^{31}-2\) is believed to be sufficient, though detailed studies quantifying this would be welcome.
Differential distributions and intermediary state files
Distributions are always calculated as histograms by binning each event according to its value for the observable \(S\).
This is done by having an \((n_b\times n_q)\)dimensional array [5] quant()
where \(n_q\) is the number of histograms to be calculated (nr_q
) and \(n_b\) is the number of bins used (nr_bins
).
The weight of each event \(\mathrm{d}\Phi\times\mathcal{M} \times w\) is added to the correct entry in bit_it
where \(w={\tt wgt}\) is the event weight assigned by vegas
.
After each iteration of vegas
we add quant()
(\({\tt quant}^2\)) to an accumulator of the same dimensions called quantsum
(quantsumsq
).
After \(i\) iterations, we can calculate the value and error as
where \(\Delta\) is the binsize.
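The accumulation just described can be sketched as follows, with \(n_q=1\) for brevity. The error formula used here is the standard spread between iterations; it is meant as an illustration of the bookkeeping with quantsum and quantsumsq, not as McMule's exact expression.

```python
import math

def accumulate(per_iteration, binsize):
    """Combine per-iteration histograms into a value and error per bin.

    per_iteration: list of i histograms (one list of bin contents per
    vegas iteration); binsize is the bin width Delta.
    """
    i = len(per_iteration)
    nbins = len(per_iteration[0])
    quantsum = [0.0]*nbins
    quantsumsq = [0.0]*nbins
    for hist in per_iteration:
        for b, w in enumerate(hist):
            quantsum[b] += w       # running sum of quant()
            quantsumsq[b] += w*w   # running sum of quant()**2
    value = [q/i/binsize for q in quantsum]
    error = [math.sqrt(max(qq/i - (q/i)**2, 0.0)/(i - 1))/binsize
             for q, qq in zip(quantsum, quantsumsq)]
    return value, error
```

Because only the two accumulators are needed, the per-iteration histograms themselves never have to be stored, which is what makes the state file described next compact.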
Related to this discussion is the concept of intermediary state files.
Their purpose is to record the complete state of the integrator after every iteration in order to recover should the program crash – or more likely be interrupted by a batch system.
McMule uses a custom file format, .vegas
, for this purpose, which uses Fortran’s record-based (instead of stream- or byte-based) format.
This means that each record starts with a 32-bit unsigned integer, i.e. 4 bytes, indicating the record’s size, and ends with the same 32-bit integer.
As this is done automatically for each record, it minimises the amount of metadata that has to be written.
The current version (v3
) must begin with the magic header and version self-identification shown in Table 6.
The latter includes the file version information and the first five characters of the source tree’s SHA1 hash, obtained using make hash
.
The header is followed by records describing the state of the integrator as shown in Table 7. In addition to the information required to continue the integration, such as the current value and grid information, this file also reserves 300 bytes for a message. This is usually set by the routine to store information on the fate of the integration, such as whether it was so far uninterrupted or whether there is reason to believe it to be inconsistent.
The latter point is particularly important. While McMule cannot read intermediary files from a different version of the file format, it will continue any integration for which it can read the state file. This also includes cases where the source tree has been changed. In this case, McMule prints a warning but continues the integration, potentially producing inconsistent results.
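The record framing described above is straightforward to parse from outside Fortran. The following Python sketch uses only the standard library and is purely illustrative (pymule ships its own reader); it writes and reads one such record, assuming little-endian markers as in Table 6.

```python
import struct

def write_record(payload):
    """Frame a payload as a Fortran sequential record:
    4-byte length marker, payload, same 4-byte marker."""
    marker = struct.pack('<I', len(payload))
    return marker + payload + marker

def read_record(buf, offset=0):
    """Read one record starting at offset; return (payload, next offset)."""
    (n,) = struct.unpack_from('<I', buf, offset)
    payload = buf[offset + 4:offset + 4 + n]
    (n2,) = struct.unpack_from('<I', buf, offset + 4 + n)
    assert n == n2, "corrupt record: leading and trailing markers differ"
    return payload, offset + 8 + n
```

The nine-byte magic header of Table 6 is exactly write_record(b' McMule  '): the marker 09 00 00 00, followed by the string, followed by the closing marker.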
offset  00  01  02  03  04  05  06  07  08  09  0A  0B  0C  0D  0E  0F
hex     09  00  00  00  20  4D  63  4D  75  6C  65  20  20  09  00  00
ASCII   \t              ' ' M   c   M   u   l   e   ' ' ' ' \t

offset  10  11  12  13  14  15  16  17  18  19  1A  1B  1C  1D  1E  1F
hex     00  0A  00  00  00  76  xx  xx  20  20  20  20  20  20  20  0A
ASCII       \n              v   v1  v2  ' ' ' ' ' ' ' ' ' ' ' ' ' ' \n

offset  20  21  22  23  24  25  26  27  28  29  2A  2B  2C  2D  2E  2F
hex     00  00  00  05  00  00  00  xx  xx  xx  xx  xx  05  00  00  00
ASCII                               s1  s2  s3  s4  s5
Off   Len   Type   Var.   Comment
                          the current iteration
                          subdiv. on an axis
                          \(\sigma/(\delta\sigma)^2\)
                          \(1/(\delta\sigma)^2\)
                          \((1-{\tt it})\chi + \sigma^2/(\delta\sigma)^2\)
                          the integration grid
                          the current random number seed
                   \(n_q\), \(n_b\), \(n_s\)   number of histograms, number of bins, length of the histogram names
      \(10n_q+8\), \(n_s n_q+8\), \(10n_q(n_b+2)+8\)   lower bounds, upper bounds, names of \(S\), accumulated histograms, accumulated histograms squared
                          current runtime in seconds
                          any message


Basics of containerisation
McMule is Dockercompatible.
Production runs should be performed with Docker [17], or its userspace complement udocker [12], to facilitate reproducibility and data retention.
On Linux, Docker uses chroot
to simulate an operating system with McMule installed.
In our case, the underlying system is Alpine Linux, a Linux distribution that is approximately 5MB in size.
Terminology
To understand Docker, we need to introduce some terms
An image is a representation of the system’s ‘hard disk’. One host system can have multiple images. In (u)Docker, the images can be listed with
docker image ls
(udocker images
). Images can have names, called tags; otherwise, Docker assigns a name as the SHA256 hash.
Because keeping multiple full file systems is rather wasteful, images are split into layers that can be shared among images. In uDocker, these are tar files containing the changes made to the file system.
To execute an image, a container needs to be generated. Essentially, this involves uncompressing all layers into a directory and
chroot
ing into said directory.
It is important to note, that containers are ephemeral, i.e. changes made to the container are not stored unless explicitly requested. This is usually not required anyway.
For external interfacing, folders of the host system are mounted into the container.
Building images
Docker images are built using Dockerfiles, a set of instructions on how to create the image from external information and a base image. To speed up building of the image, McMule uses a custom base image called mcmulepre
that is constructed as follows
FROM alpine:3.11
LABEL maintainer="yannick.ulrich@psi.ch"
LABEL version="1.0"
LABEL description="The base image for the full McMule suite"
# Install a bunch of things
RUN apk add py3-numpy py3-scipy ipython py3-pip git tar gfortran gcc make curl musl-dev
RUN echo "http://dl-8.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories && \
apk add py3-matplotlib && \
sed -i '$ d' /etc/apk/repositories
On top of this, McMule is built
FROM yulrich/mcmulepre:1.0.0
LABEL maintainer="yannick.ulrich@psi.ch"
LABEL version="1.0"
LABEL description="The full McMule suite"
RUN pip3 install git+https://gitlab.com/mule-tools/pymule.git
COPY . /montecarlo
WORKDIR /montecarlo
RUN ./configure
RUN make
To build this image, run
mcmule$ docker build -t $mytagname . # Using Docker
mcmule$ udocker build -t=$mytagname . # Using udocker
The CI system uses udocker to perform builds after each push. Note that using udocker for building requires a patched version of the code that is available from the McMule collaboration.
Creating containers and running
In Docker, containers are usually created and run in one command
$ docker run --rm $imagename $cmd
The flag --rm
makes sure the container is deleted once it has completed.
If the command is a shell (usually ash
), the flag -i
also needs to be provided.
For udocker, creation and running can be done in two steps
$ udocker create $imagename
# this prints the container id
$ udocker run $containerid $cmd
# work in container
$ udocker rm $containerid
or in one step
$ udocker run --rm $imagename $cmd
Running containers can be listed with udocker ps
and docker ps
.
For further details, the reader is pointed to the manuals of Docker and udocker.