Getting started
McMule is written in Fortran 95 with helper and analysis tools written in Python.
This guide will help you to get started using McMule by describing in detail how to calculate the NLO corrections to \(\tau\to[\nu\bar\nu] e\gamma\).
Since the neutrinos are not detected, we average over them, indicated by the brackets.
Hence, we have to be fully inclusive w.r.t. the neutrinos.
Still, the code allows one to make any cut on the other final-state particles.
As we will see, the BR for this process, as measured by BaBar [14, 18], differs by more than \(3\,\sigma\) from the SM value.
This will illustrate the importance of fully differential NLO corrections in QED.
Obtaining the code
McMule is distributed in multiple ways:
as a precompiled executable for recent-ish GNU Linux distributions. To be precise, your version of glibc needs to be newer than 2.17. The currently supported versions of most popular distributions (CentOS, Debian, Ubuntu, Fedora, RHEL) should be fine.
as a Docker image that can be used on any platform.
as the source code on GitLab that can be compiled by the user. This contains the current release in the default release branch as well as the developer preview (devel).
Here we will focus on the first method as it is by far the easiest. For developers and tinkerers, we refer to Section Manual compilation on how to compile the code yourself.
First, obtain the McMule distribution from our website and extract the tarball
$ tar xzvf mcmule-release.tar.gz
mcmule-release/mcmule
mcmule-release/mcmule.mod
That’s it. You can, if you want, install McMule to use it from any directory with the following commands, but this is not required

$ cp mcmule-release/mcmule /usr/local/bin
$ cp mcmule-release/mcmule.mod /usr/local/include
To make use of McMule results, we also require the pymule Python package. It can be used from the tools/ folder of the McMule repository, but it is recommended that the user installs it
$ pip3 install git+https://gitlab.com/muletools/pymule.git
Simple runs at LO
Setting McMule up
In this example we want to compute two distributions, the invariant mass of the \(e\gamma\) pair, \(m_{\gamma e}\equiv \sqrt{(p_e+p_\gamma)^2}\), and the energy of the electron, \(E_e\), in the rest frame of the tau. To avoid an IR singularity in the BR, we have to require a minimum energy of the photon. We choose this to be \(E_\gamma \ge 10\,{\rm MeV}\) as used in [14, 18].
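Before touching Fortran, the two observables can be sketched in plain Python. This is an illustration only: sq() below is our own stand-in for McMule's momentum-squaring function, the (px, py, pz, E) storage convention is McMule's, and the momenta are made-up numbers.

```python
import numpy as np

# McMule stores four-momenta as (px, py, pz, E)
def sq(p):
    """Minkowski square E^2 - |p|^2 (illustrative stand-in for McMule's sq())."""
    return p[3]**2 - p[0]**2 - p[1]**2 - p[2]**2

# Hypothetical electron and photon momenta in the tau rest frame (MeV)
me = 0.511
pe = np.array([0., 0., 300., np.sqrt(300.**2 + me**2)])
pg = np.array([0., 0., -200., 200.])

minv = np.sqrt(sq(pe + pg))   # invariant mass of the e-gamma pair
Ee = pe[3]                    # electron energy in this frame
```

The same two quantities are what the measurement function below fills into the histograms.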
At first, we need to find out how the process \(\tau\to\nu\bar\nu e\gamma\) is implemented in McMule.
For this, we refer to the table in Section Available processes and which_piece that specifies the pieces (sometimes called which_piece) required for a generic process.
The generic process is a prototype for the physical process such as \(\ell \to\nu\bar\nu \ell' \gamma\) where the flavour of the lepton \(\ell\) is left open.
In our case, we need to consider the row for \(\mu\to\nu\bar\nu e\gamma\).
Since we are only interested in LO, the only which_piece we need is m2enng0.
To change from the generic process \(\mu\to\nu\bar\nu e\gamma\) to the process we are actually interested in, \(\tau\to\nu\bar\nu e\gamma\), we pick the flavour taue, which refers to a \(\tau \to e \cdots\) transition.
Other options would be taumu for \(\tau\to\mu \cdots\) or mue for \(\mu\to e\cdots\).
Next, we need to find out which particle ordering is used in McMule for this piece, i.e. which variable will contain e.g. the electron momentum.
This is called the particle identification or PID.
We can refer to the table in Section Particle ID to find that for the which_piece m2enng0, we have \(q_1 = p_\tau\), \(q_2 = p_e\), \(q_3\) and \(q_4\) for the (unobserved) neutrinos, and \(q_5 = p_\gamma\).
We can now implement our observables.
For this, we need to define a user.f95 file.
An empty template can be found on our website.
We can use this file to implement the measurement function we want to calculate, i.e. which distributions and cuts we want to apply.
We can further add some code that will execute at the beginning of the Monte Carlo run (allowing us e.g. to further configure our calculation) and for each event (e.g. to simulate beam spread).
We begin by specifying the metadata of our histograms: we want two histograms (nr_q = 2) with 90 bins each (nr_bins = 90).
The ranges should be \(0 < m_{\gamma e} < 1800\,{\rm MeV}\) and \(0 \le E_e \le 900\,{\rm MeV}\).
integer, parameter :: nr_q = 2
integer, parameter :: nr_bins = 90
real(kind=prec) :: &
    min_val(nr_q) = (/ 0., 0. /)
real(kind=prec) :: &
    max_val(nr_q) = (/ 1800., 900. /)
Note
Finding suitable values for the ranges can be tricky beyond LO and usually requires a few test runs. Since all histograms have the same number of bins, one is often forced to have empty bins to ensure ‘nice’ bin widths.
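With the ranges above, the resulting bin widths follow from simple arithmetic (this is plain illustration, not a McMule call):

```python
nr_bins = 90
ranges = {'minv': (0., 1800.), 'Ee': (0., 900.)}

# width of each histogram bin: (max - min) / nr_bins
widths = {name: (hi - lo) / nr_bins for name, (lo, hi) in ranges.items()}
# minv bins are 20 MeV wide, Ee bins 10 MeV wide
```

Choosing ranges that divide evenly by the common bin count is what produces the ‘nice’ bin widths mentioned in the note.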
We can now define the actual measurement function called quant().
We need to

1. calculate the invariant mass of the \(e\gamma\) pair. This is done using the momentum-squaring function sq(). The result is stored in the first distribution, quant(1).

2. store the electron energy in quant(2). Since this is frame-dependent, we need to know that McMule generates the particles in the tau rest frame. However, in general it is better to boost into that frame. Further, McMule stores momenta as (/px,py,pz,E/), meaning the energy is q2(4).

3. cut on the photon energy q5(4). The variable pass_cut controls the cuts. Initially it is set to .true., to indicate that the event is kept. Applying a cut amounts to setting pass_cut to .false..
FUNCTION QUANT(q1,q2,q3,q4,q5,q6,q7)

  real (kind=prec), intent(in) :: q1(4),q2(4),q3(4),q4(4), q5(4),q6(4),q7(4)
  real (kind=prec) :: quant(nr_q)
  !! ==== keep the line below in any case ==== !!
  call fix_mu

  pass_cut = .true.
  if(q5(4) < 10._prec) pass_cut = .false.

  names(1) = 'minv'
  quant(1) = sqrt(sq(q2+q5))

  names(2) = 'Ee'
  quant(2) = q2(4)

END FUNCTION QUANT
In addition to the numeric value in quant(i) we store a human-readable name in names.
Warning
The maximal length of these names is defined in the variable namesLen, which defaults to 6 characters.
Also note that this measurement function is not IR-safe!
Finally, we need to specify the masses of the particles involved.
This is called the flavour in McMule.
We do this by defining a routine called inituser() which calls the McMule function initflavour().
Additionally, we also use this for some self-documentation
SUBROUTINE INITUSER
  print*, "Calculating tau -> e nu nu gamma at LO"
  call initflavour("taue")

  ! Let the tau be unpolarised
  pol1 = (/ 0._prec, 0._prec, 0._prec, 0._prec /)
END SUBROUTINE
This sets the flavour variable to taue, changing from the generic process \(\mu\to\nu\bar\nu e\gamma\) to the process we are actually interested in, \(\tau\to\nu\bar\nu e\gamma\).
We now need to compile our observable into a shared library so that McMule can load it. To do this, we run
$ gfortran -fPIC -shared -o user.so user.f95
This requires the mcmule.mod file to either be in the current directory or installed somewhere the compiler can find it.
Otherwise, one needs to add the corresponding flag
$ gfortran -I/path/to/the/folder/of/mcmule.mod/ -fPIC -shared -o user.so user.f95
Warning
A version of the mcmule.mod header file is distributed as part of McMule.
If you are using a copy of GFortran prior to version 8, you will have to regenerate the header file manually.
To do this, you can use the buildheader.sh script.
Running McMule manually
Now the mule is ready to trot.
For quick and dirty runs of McMule, the easiest way is to just execute the mcmule binary in the same directory as the user.so file and input the configuration by hand.
However, since this is not how the code is meant to be used, it will not prompt the user but just expect the correct input.
We now need to choose the statistics we want. For this example, we pick 10 iterations with \(1000\times 10^3\) points each for preconditioning and 50 iterations with \(10000\times 10^3\) points each for the actual numerical evaluation (cf. Section sec_stat for some heuristics to determine the statistics needed).
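The total number of function calls this choice implies can be budgeted with a quick back-of-the-envelope calculation (illustrative arithmetic only; the actual number of accepted phase-space points reported by McMule will differ):

```python
# calls per iteration are entered in units of 1000
ncall_pre, itmx_pre = 1000 * 10**3, 10    # preconditioning: 10 iterations of 1M calls
ncall, itmx = 10000 * 10**3, 50           # main run: 50 iterations of 10M calls

total_calls = ncall_pre * itmx_pre + ncall * itmx
# 10 * 1M + 50 * 10M = 510M requested calls
```

This kind of estimate is useful when deciding whether a run fits into the available CPU budget.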
We pick a random seed between 0 and \(2^{31}-1\) (cf. Section Random number generation), say 70998, and for the input variable which_piece we enter m2enng0 as discussed above.
This system is used for other processes as well.
The input variable which_piece determines the generic process and the part of it that is to be computed (i.e. tree level, real, double-virtual etc.).
This means that we need to input the following (the specifications for the input can be found in Table 1):
Warning
When running mcmule outside the normal repository, you need to make sure that an out/ folder exists.
$ gfortran -fPIC -shared -o user.so user.f95
$ ./mcmule
1000
10
10000
50
70998
1.0
1.0
m2enng0
*******************************************
* C O L L I E R *
* *
* Complex One-Loop Library *
* In Extended Regularizations *
* *
* by A.Denner, S.Dittmaier, L.Hofer *
* *
* version 1.2.3 *
* *
*******************************************
 *  *  *  *  *  *  * 
Version information
Full SHA: 3342511
Git SHA: 1fbc291
Git branch: HEAD
 *  *  *  *  *  *  * 
Calculating tau -> e nu nu gamma at LO
 *  *  *  *  *  *  * 
internal avgi, sd: 31902651645147.434 3242845143300.6875
internal avgi, sd: 36962119569527.797 1491060763146.8340
internal avgi, sd: 39908483081760.562 701506532475.22485
internal avgi, sd: 41908326436302.352 183707731215.21738
internal avgi, sd: 41771416194096.336 55441877946.459091
internal avgi, sd: 41871492562379.680 27645368422.638184
internal avgi, sd: 41870973597620.547 21172712863.774796
internal avgi, sd: 41881968277900.094 17287639806.820400
internal avgi, sd: 41894819976244.469 15148087824.181145
internal avgi, sd: 41892443511666.180 13860145189.905710
internal avgi, sd: 41883909931737.320 9081654737.2369480
internal avgi, sd: 41891877400107.203 5996399688.5281315
internal avgi, sd: 41887401454137.172 4967120009.5028763
internal avgi, sd: 41894988589984.109 4318086453.2893734
internal avgi, sd: 41895218930734.938 3855189670.2831044
internal avgi, sd: 41893628691682.039 3569029881.5161963
internal avgi, sd: 41895702521658.094 3301046354.0162683
internal avgi, sd: 41894921420510.164 3068605199.3146548
internal avgi, sd: 41894380982483.836 2884341089.4262500
internal avgi, sd: 41894136940077.953 2719744511.4164872
internal avgi, sd: 41894755045877.328 2575585554.2240872
internal avgi, sd: 41894180414331.000 2448105950.1048141
internal avgi, sd: 41892974586371.242 2335940686.3421283
internal avgi, sd: 41892018243977.422 2237743190.2910728
internal avgi, sd: 41892128399199.852 2151548541.7080536
internal avgi, sd: 41891054946079.172 2079987935.2725692
internal avgi, sd: 41890529496649.336 2009220806.5744867
internal avgi, sd: 41889627128683.867 1952022430.9618585
internal avgi, sd: 41889091169697.750 1901209181.0766854
internal avgi, sd: 41889491086513.711 1851335964.4142988
internal avgi, sd: 41889024177143.492 1804584253.3775585
internal avgi, sd: 41888652800094.414 1763879105.2402925
internal avgi, sd: 41888186242209.695 1734933321.8742726
internal avgi, sd: 41888838662647.031 1702558920.3787835
internal avgi, sd: 41888878166048.805 1664687915.8245957
internal avgi, sd: 41888871786161.102 1628412032.4284451
internal avgi, sd: 41888673286317.961 1598222188.0324447
internal avgi, sd: 41888363240043.000 1570566301.1038322
internal avgi, sd: 41888533287695.047 1541834130.0455377
internal avgi, sd: 41888087919550.688 1514438031.2038479
internal avgi, sd: 41887838382975.297 1487390337.2575607
internal avgi, sd: 41887692329889.953 1465777595.5131192
internal avgi, sd: 41887528786746.531 1450928637.2665195
internal avgi, sd: 41887814931451.086 1432276674.4390638
internal avgi, sd: 41887764015763.508 1409275835.0397925
internal avgi, sd: 41887871329949.469 1388512123.7455208
internal avgi, sd: 41887728279057.234 1369450030.9152539
internal avgi, sd: 41887673022843.117 1350554230.4978600
internal avgi, sd: 41887824787223.562 1332418851.3856776
internal avgi, sd: 41887720562604.266 1315515744.3643637
internal avgi, sd: 41887747627717.641 1297906527.3172038
internal avgi, sd: 41887385610296.359 1280408319.1259799
internal avgi, sd: 41887163475026.672 1262887224.9655898
internal avgi, sd: 41887020587065.422 1248392301.3985555
internal avgi, sd: 41886965905979.375 1236043197.7830524
internal avgi, sd: 41887132288349.984 1225259563.1290646
internal avgi, sd: 41887118281531.000 1211732414.0191844
internal avgi, sd: 41887256099447.883 1199948076.9753296
internal avgi, sd: 41887425753145.656 1188759649.9116085
internal avgi, sd: 41887079359692.539 1176252324.5589268
points: 10* 1M + 50* 10M random seed: 70998
part: m2enng0 xicut: 1.00000 delcut: 1.00000
points on phase space 321225751 thereof fucked up 0
result, error: { 4.18871E+13, 1.17625E+09 }; chisq: 1.27
 *  *  *  *  *  *  * 
McMule begins by printing some auto-versioning information (the SHA1 hashes of the source code and the git version) as well as some user-defined information from the subroutine inituser().
Next, the integration begins.
After every iteration, McMule prints the current best estimate and error of the total cross section or decay rate.
Before exiting, it will also print again the input used as well as the number of points evaluated and the final result.
This run took approximately 15 minutes.
| Variable name | Data type | Comment |
| nenter_ad | integer | calls / iteration during preconditioning |
| itmx_ad | integer | iterations during preconditioning |
| nenter | integer | calls / iteration during main run |
| itmx | integer | iterations during main run |
| | integer | random seed \(z_1\) |
| | real | the \(0<\xi_{c}\le1\) parameter |
| | real | the \(\delta_{\text{cut}}\) parameter (or at NNLO the second \(\xi_{c}\)) |
| which_piece | char | the part of the calculation to perform |
| (opt) | unknown | the user can request further input during inituser() |
Analysing the output
After running McMule we want to calculate the actual cross section or decay rate and make plots.
The McMule output is saved to the out/ folder as a .vegas file that contains the entire state of the integrator (cf. Section Differential distributions and intermediary state files).
We can open this file in python and make plots.
While it is possible to open just a single file using pymule.vegas.FileRecord.read(), this is rarely done as real-world calculations can involve hundreds of .vegas files.
Instead, we move the .vegas file into a new directory, say example1, and then use sigma() and mergefks().
from pymule import *

lifetime = 1/(1000*(6.582119e-25)/(2.903e-13))
# define vegas directory
setup(folder="example1/")
dat = mergefks(sigma("m2enng0")) * GF**2*alpha*lifetime

dat
# <vegas3 Record m2enng0 xsec = 0.0183403(5), 2 histograms>
dat.histograms
# ['minv', 'Ee']
Warning
In McMule the numerical values of the Fermi constant \(G_F\) and the fine-structure constant \(\alpha\) are set to one for predominantly historical reasons. This needs to be restored in Python.
The variable dat now contains the runtime (time), the branching ratio (after multiplication with the lifetime, value), and the \(\chi^2\) of the integration (chi2a) as well as our distributions (Ee and minv).
Numerical values such as cross sections or branching ratios are stored as numpy arrays with errors as np.array([y, dy]).
Distributions are stored as numpy \(N\times 3\) matrices
np.array([[x1, y1, dy1],
[x2, y2, dy2],
[x3, y3, dy3],
[x4, y4, dy4],
[x5, y5, dy5],
...
[xn, yn, dyn]])
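Since these are plain NumPy arrays, standard array operations apply directly. For instance (with made-up numbers; the column layout is the one just described), the total over a distribution can be recovered by summing the bins:

```python
import numpy as np

# A toy distribution in pymule's N x 3 layout: columns are (x, y, dy)
dist = np.array([[10., 0.2, 0.01],
                 [30., 0.4, 0.01],
                 [50., 0.3, 0.02],
                 [70., 0.1, 0.01]])

x, y, dy = dist[:, 0], dist[:, 1], dist[:, 2]
dx = x[1] - x[0]                      # uniform bin width

total = np.sum(y) * dx                # integral of the distribution over the bins
err = np.sqrt(np.sum(dy**2)) * dx     # uncorrelated error propagation
```

pymule provides higher-level tools for this kind of manipulation, but nothing stops the user from working with the raw arrays.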
These can be manipulated e.g. using the tools of pymule described in Section pymule user guide.
For now, we will just make a plot of the \(E_e\) distribution Ee
fig = plt.figure()
errorband(dat['Ee'])
plt.ylabel(r'$\D\mathcal{B}/\D E_e$')
plt.xlabel(r'$E_e\,/\,{\rm MeV}$')
mulify(fig)
fig.savefig("dummy.svg")
Running at NLO and beyond
A few things change once we go beyond LO since we can have extra radiation.
To account for this, more which_piece need to be run and then correctly combined.
This also increases the number of runs necessary, meaning that the manual approach from above is no longer feasible.
Setting McMule up
Referring back to Section Available processes and which_piece, we find that we need the pieces m2enngF and m2enngR for the virtual and real corrections, respectively.
The PID table of Section Particle ID tells us that the real photon is going to be q6.
We first need to decide whether we want to calculate exclusive or inclusive decays.
The details here depend on the exact experimental situation which can be tricky to properly implement.
Following the BaBar analysis [14, 18] we will consider the exclusive radiative decay, i.e. we request precisely one photon with energy \(E_\gamma>10\,{\rm MeV}\).
The function quant() will have to take this into account with the additional argument q6, the momentum of the second photon.
To ensure IR safety, we define the harder and softer photons gh and gs, respectively, and require that the former (latter) has energy larger (smaller) than \(10\,{\rm MeV}\).
This new version of quant() is also suitable for the LO calculation, and it is generally advisable to use a single quant() function for all parts of a computation.
FUNCTION QUANT(q1,q2,q3,q4,q5,q6,q7)

  real (kind=prec), intent(in) :: q1(4),q2(4),q3(4),q4(4), q5(4),q6(4),q7(4)
  real (kind=prec) :: quant(nr_q)
  real (kind=prec) :: gs(4), gh(4)
  !! ==== keep the line below in any case ==== !!
  call fix_mu

  pass_cut = .true.   ! reset the cut flag for every event
  if (q5(4) > q6(4)) then
    gh = q5 ; gs = q6
  else
    gh = q6 ; gs = q5
  endif

  if (gh(4) < 10.) pass_cut = .false.
  if (gs(4) > 10.) pass_cut = .false.

  names(1) = 'minv'
  quant(1) = sqrt(sq(q2+gh))

  names(2) = 'Ee'
  quant(2) = q2(4)

END FUNCTION QUANT
Running McMule
The FKS scheme used in McMule introduces an unphysical parameter called \(\xi_c\) that can be varied within \(0<\xi_{c}\le1\).
Checking the independence of physical results on this parameter serves as a consistency check, both of the implementation in McMule and of the IR safety of the measurement function.
To do this, it can help to disentangle m2enngF into m2enngV and m2enngC, though this is not necessary.
Only the latter depends on \(\xi_c\), and this part is typically much faster in the numerical evaluation.
A particularly convenient way to run McMule is using menu files [1].
A menu file contains a list of jobs to be computed such that the user will only have to vary the random seed and \(\xi_{c}\) by hand, as the statistical requirements are defined globally in a config file.
This is complemented by a submission script, usually called submit.sh.
The submit script is what will need to be launched; it in turn will take care of starting the different jobs.
It can be run on a normal computer or on a SLURM cluster [27].
To prepare the run in this way we can use pymule
$ pymule create -i
What generic process? [m2enn] m2enng
How many / which seeds? [5]
Which xi cuts? [[0.5, 0.25, 0.125]]
Where to store data? [m2enng-taue] example2
Which pieces? [['0', 'V', 'R']] 0, V, C, R
How much statistics for 0 (pc, pi, c, i)? [(10000, 20, 100000, 100)] 1000,10,1000,50
How much statistics for V (pc, pi, c, i)? [(10000, 20, 100000, 100)] 1000,10,1000,50
How much statistics for C (pc, pi, c, i)? [(10000, 20, 100000, 100)] 1000,10,1000,50
How much statistics for R (pc, pi, c, i)? [(10000, 20, 100000, 100)] 5000,50,10000,100
Building files. To rerun this, execute
pymule create \
    --seeds 70998 66707 69184 75845 63937 \
    --xi 0.5 0.25 0.125 \
    --genprocess m2enng \
    --output-dir babar-taue \
    --prog mcmule \
    --stat R,5000,50,10000,100 \
    --stat 0,1000,10,1000,50 \
    --stat V,1000,10,1000,50 \
    --stat C,1000,10,1000,50
Expect 3750 iterations, 20.250000G calls
Created menu, config and submit script in example2
Please change the --ntasks and --time options accordingly
When using the tool, we are asked various questions, most of which have a default answer in square brackets.
In the end pymule will create a directory that the user decided to call example2, where all results will be stored.
The menu and config files generated by pymule are shown in Listing 6 and Listing 7.
[McMule]
# specify the program to run relative to `pwd`
mcmule = "mcmule"
# Specify the variables nenter_ad, itmx_ad, nenter and itmx
# for each piece you want to run.
[McMule.statistics.vegas]
m2enng0 = { ncall_pre = 1000, itmx_pre = 10, ncall = 1000, itmx = 50 }
m2enngV = { ncall_pre = 1000, itmx_pre = 10, ncall = 1000, itmx = 50 }
m2enngC = { ncall_pre = 1000, itmx_pre = 10, ncall = 1000, itmx = 50 }
m2enngR = { ncall_pre = 5000, itmx_pre = 50, ncall = 10000, itmx = 100 }
We now need to ensure pymule finds the mcmule executable, either by copying it into the directory pymule just created or by adapting the path in the config.
After copying the user library user.so into the same folder, we can start mcmule by going into the directory example2 that pymule just created and running
cd example2
cp /path/to/mcmule .
cp /path/to/user.so .
pymule batch --shell menu-m2enng.menu
Note that per default this will spawn at most as many jobs as the computer pymule runs on has CPU cores.
If the user wishes a different number of parallel jobs, change the fifth line of example2/submit.sh to

pymule batch --shell --np <number of cores> menu-m2enng.menu
When running on a SLURM system, we need to use a SLURM submission script, adapted to the needs of the cluster mcmule is running on
#!/bin/bash
#SBATCH --partition=<partition>
#SBATCH --time=<time limit>
#SBATCH --ntasks=<number of jobs>
#SBATCH --clusters=<name of the cluster>
#SBATCH --output=slurm-%j.out
pymule batch --slurm menu-m2enng.menu
Analysing the results
After running the code, we need to combine the various which_piece into physical results that we will want to use to create plots.
This is the moment where pymule's mergefks() shines.
from pymule import *
# To normalise branching ratios, we need the tau lifetime
lifetime = 1/(1000*(6.582119e-25)/(2.903e-13))
# The folder where McMule has stored the statefiles
setup(folder='example2/out.tar.bz2')
# Import LO data and rescale to branching ratio
LO = mergefks(sigma('m2enng0')) * GF**2*lifetime*alpha
# Import NLO corrections from the three pieces
NLO = mergefks(
sigma('m2enngR'), # real corrections
sigma('m2enngC'), # counter term
anyxi=sigma('m2enngV') # virtual corrections
) * GF**2*lifetime*alpha**2
# The branching ratio at NLO = LO + correction
fullNLO = LO + NLO
# Print results
print("BR_0 = ", printnumber(LO.value))
print("dBR = ", printnumber(NLO.value))
# Produce energy plot
fig1, (ax1, ax2) = kplot(
{'lo': LO['Ee'], 'nlo': NLO['Ee']},
labelx=r"$E_e\,/\,{\rm MeV}$",
labelsigma=r"$\D\mathcal{B}/\D E_e$"
)
ax2.set_ylim(-0.2,0.01)
# Produce visible mass plot
fig2, (ax1, ax2) = kplot(
{'lo': LO['minv'], 'nlo': NLO['minv']},
labelx=r"$m_{e\gamma}\,/\,{\rm MeV}$",
labelsigma=r"$\D\mathcal{B}/\D m_{e\gamma}$"
)
ax1.set_yscale('log')
ax1.set_xlim(1000,0) ; ax1.set_ylim(5e-9,1e-3)
ax2.set_ylim(-0.2,0.)
Once pymule is imported and set up, we import the LO and NLO which_piece and combine them using two central pymule commands that we have encountered above: sigma() and mergefks().
sigma() takes the which_piece as an argument and imports matching results, already merging different random seeds.
mergefks() takes the results of (multiple) sigma() invocations, adds results with matching \(\xi_{c}\) values, and combines the result.
In the present case, \(\sigma_n^{(1)}\) is split into multiple contributions, namely m2enngV and m2enngC.
This is indicated by the anyxi argument.
Next, we can use some of pymule’s tools (cf. Listing Listing 8) to calculate the full NLO BRs from the corrections and the LO results
which agree with [10, 21], but \(\mathcal{B}_{\text{NLO}}\) is in tension with the value \(\mathcal{B}_{\text{exp}} = 1.847(54)\times 10^{-2}\) reported by BaBar [14, 18]. As discussed in [21, 24], it is very likely that this tension would be removed if a full NLO result were used to take into account the effects of the stringent experimental cuts to extract the signal. This issue has been explained in detail in [21, 24, 25].
As a last step, we can use the matplotlib-backed kplot() command to present the results for the distributions (logarithmic for \(m_{e\gamma}\) and linear for \(E_e\)).
The upper panel of Figure 2 shows the results for the invariant mass \(m_{e\gamma}\) at LO (green) and NLO (blue) in the range \(0\le m_{e\gamma} \le 1\,{\rm GeV}\).
Note that this range, for the purposes of the demonstration, does not correspond to the histogram boundaries given in the run.
The distribution falls sharply for large \(m_{e\gamma}\). Consequently, there are only few events generated in the tail and the statistical error becomes large. This can be seen clearly in the lower panel, where the NLO \(K\) factor is shown. It is defined as the ratio of the NLO correction to the LO distribution, \(\delta K^{(1)} = {\rm d}\sigma^{(1)}/{\rm d}\sigma^{(0)}\),
and the band represents the statistical error of the Monte Carlo integration. To obtain a reliable prediction for larger values of \(m_{e\gamma}\), i.e. the tail of the distribution, we would have to perform tailored runs. To this end, we should introduce a cut \(m_{\text{cut}}\ll m_\tau\) on \(m_{e\gamma}\) to eliminate events with larger invariant mass. Due to the adaption of the numerical integration, we then obtain reliable and precise results for values of \(m_{e\gamma} \lesssim m_{\text{cut}}\).
Figure 3 shows the electron energy distribution, again at LO (green) and NLO (blue). As for \(m_{e\gamma}\) the corrections are negative and amount to roughly \(10\%\). Since this plot is linear, they can be clearly seen by comparing LO and NLO. In the lower panel once more the \(K\) factor is depicted. Unsurprisingly, at the very end of the distribution, \(E_e\sim 900\,{\rm MeV}\), the statistics is out of control.
More complicated runs
To demonstrate some of McMule's capabilities, we tweak the observable a bit. Since the tau is usually produced in \(e^+e^-\to \tau^+\tau^-\), we instead use the LO cross section
with \(\sqrt s = m_{\Upsilon(4S)} = 10.58\,{\rm GeV}\).
To accurately simulate this situation, we need to
choose a random value for \(\theta\),
construct the tau momentum \(p_1\) in the lab frame,
boost the momenta from McMule into this frame, and
apply a correction weight from (1)
for every event. We require the following cuts in the lab frame
the produced electron and the hard photon have \(-0.75 \le \cos\theta_{e,\gamma} \le +0.95\)
the hard photon energy is bigger than 220 MeV
Further, we want to have a switch for inclusive and exclusive measurements without having to adapt the user file.
Asking for user input
To be able to switch cuts on and off, we need to read input from the user at runtime.
This can be done in the inituser() routine where input can be read.
We can store the result in a global variable (exclusiveQ) so we can later use it in quant().
Further, we need to modify the name of the vegas file by changing filenamesuffix.
It is also good practice to print the configuration chosen for documentation purposes.
SUBROUTINE INITUSER
  call initflavour("taue")
  read*, exclusiveQ

  if(exclusiveQ == 1) then
    print*, "Calculating tau -> e nu nu gamma in ee -> tau tau exclusive"
    filenamesuffix = "e"
  else
    print*, "Calculating tau -> e nu nu gamma in ee -> tau tau inclusive"
    filenamesuffix = "i"
  endif

  ! Let the tau be unpolarised
  pol1 = (/ 0._prec, 0._prec, 0._prec, 0._prec /)
END SUBROUTINE
Note
When using the menu file system, this can only be a single integer. To read multiple bits of information, you need to encode the data somehow.
Generation of the tau momentum
We can use the user integration feature of McMule to generate \(\cos\theta\). This allows us to write
with a generalised measurement function \(f\).
Since (1) is sufficiently simple, we will sample \(\cos\theta\) with a uniform distribution and apply a correction weight rather than trying to sample it directly.
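The sample-uniformly-and-reweight idea can be demonstrated in a few lines of standalone Python. The weight function w below is a hypothetical stand-in playing the role of userweight (r stands in for the mass ratio \(M^2/E_\tau^2\)); the point is only that uniform sampling times the weight reproduces the integral of the non-uniform density:

```python
import random

def w(c, r=0.5):
    """Toy angular density over -1 < c < 1; analytic integral is 2*(1+r)."""
    return (1 + r) + (1 - r) * c

random.seed(42)
n = 200000
total = 0.0
for _ in range(n):
    x = random.random()
    c = 2 * x - 1          # uniform cos(theta) in (-1, 1)
    total += w(c) * 2      # weight times the Jacobian of c = 2x - 1
estimate = total / n       # MC estimate of the integral (analytically 3 for r = 0.5)
```

McMule does the same thing internally: the uniform vegas variable is mapped to \(\cos\theta\) and the event weight is multiplied by userweight.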
We set the variable userdim to one to indicate that we want to carry out \(m=1\) extra integrations and define the function userevent() that sets the global variable cth for \(\cos\theta\)
SUBROUTINE USEREVENT(X, NDIM)
  integer :: ndim
  real(kind=prec) :: x(ndim)

  cth = 2*x(1) - 1
  userweight = (1+Mm**2/Etau**2) + (1-Mm**2/Etau**2) * cth
END SUBROUTINE USEREVENT
Warning
This function can be used to change the centre-of-mass energy and masses of the particles. However, one must then recompute the flux factors and the \(\xi_{\text{max}}\) relations.
Boosting into the lab frame
We begin by writing down the momentum of the tau in the lab frame as \(p_\tau = (0,\, |\vec p|\sin\theta,\, |\vec p|\cos\theta,\, E_\tau)\) in McMule's \((p_x,p_y,p_z,E)\) convention, with \(|\vec p| = \sqrt{E_\tau^2-m_\tau^2}\).
Next, we use the McMule function boost_back() to boost the momenta we are given into the lab frame.
From there we can continue applying our cuts as before, utilising the McMule function cos_th to calculate the angle between the particle and the beam axis.
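The kinematics can be checked with a standalone sketch. The boost below is our own textbook implementation in the \((p_x,p_y,p_z,E)\) convention, standing in for what boost_back() does (its actual signature and conventions are McMule's); the masses and energies are illustrative values for a tau at the \(\Upsilon(4S)\):

```python
import numpy as np

def boost_back(p, q):
    """Boost four-vector q, given in the rest frame of p, into the frame
    where p is measured. Convention: (px, py, pz, E)."""
    m = np.sqrt(p[3]**2 - p[0]**2 - p[1]**2 - p[2]**2)
    bv = p[:3] / p[3]                  # boost velocity vector
    gamma = p[3] / m
    bq = np.dot(bv, q[:3])
    E = gamma * (q[3] + bq)
    pvec = q[:3] + bv * (gamma**2 / (gamma + 1) * bq + gamma * q[3])
    return np.append(pvec, E)

Mtau, Etau = 1776.86, 5290.0           # MeV; Etau = sqrt(s)/2 for sqrt(s) = 10.58 GeV
ptau = np.sqrt(Etau**2 - Mtau**2)
cth = 0.3
p1Lab = np.array([0., ptau*np.sqrt(1 - cth**2), ptau*cth, Etau])

# The tau at rest in its own frame boosts back to p1Lab itself
q1 = np.array([0., 0., 0., Mtau])
```

A useful sanity check is that the boost preserves invariant masses, e.g. a massless photon stays massless after boosting.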
FUNCTION QUANT(q1,q2,q3,q4,q5,q6,q7)

  real (kind=prec), intent(in) :: q1(4),q2(4),q3(4),q4(4), q5(4),q6(4),q7(4)
  real (kind=prec) :: ptau, cos_e, cos_g
  real (kind=prec) :: p1Lab(4), p2Lab(4), p5Lab(4), p6Lab(4)
  real (kind=prec) :: quant(nr_q)
  real (kind=prec) :: gs(4), gh(4)
  real (kind=prec), parameter :: ez(4) = (/ 0., 0., 1., 0. /)
  !! ==== keep the line below in any case ==== !!
  call fix_mu
  pass_cut = .true.

  ptau = sqrt(Etau**2-Mtau**2)
  p1Lab = (/ 0., ptau*sqrt(1-cth**2), ptau*cth, Etau /)

  p1Lab = boost_back(p1Lab, q1)
  p2Lab = boost_back(p1Lab, q2)
  p5Lab = boost_back(p1Lab, q5)
  p6Lab = boost_back(p1Lab, q6)

  if (p5Lab(4) > p6Lab(4)) then
    gh = p5Lab ; gs = p6Lab
  else
    gh = p6Lab ; gs = p5Lab
  endif

  cos_e = cos_th(p2Lab, ez)
  cos_g = cos_th(gh , ez)

  if ( (cos_e > 0.95 .or. cos_e < -0.75) .or. (cos_g > 0.95 .or. cos_g < -0.75) ) then
    pass_cut = .false.
    return
  endif

  if(exclusiveQ == 1) then
    if (gh(4) < 220. .or. gs(4) > 220.) then
      pass_cut = .false.
      return
    endif
  else
    if (gh(4) < 220.) then
      pass_cut = .false.
      return
    endif
  endif

  names(1) = 'minv'
  quant(1) = sqrt(sq(q2+gh))

  names(2) = 'Ee'
  quant(2) = p2Lab(4)

  names(3) = 'cos_e'
  quant(3) = cos_e

  names(4) = 'cos_g'
  quant(4) = cos_g
END FUNCTION QUANT
Running and analysis
At this point we can run McMule and proceed with the analysis as before.
We need to do two runs, one for the exclusive and one for the inclusive.
However, only the real corrections differ, therefore we only need 24 runs and not 36.
The last argument of the run command in the menu file will be passed on to the inituser() routine as the observable selector.
We need to pass 1 (exclusive) or 0 (inclusive) as shown in Listing 9 [2].
We can now run McMule.
When analysing the output, we need to take care not to mix the different observables, which we do by passing the optional argument obs to sigma().
The resulting plot is shown in Figure 4.
# Loading the LO is the same as before
setup(folder='example3/out.tar.bz2')
LO = mergefks(sigma('m2enng0')) * GF**2*lifetime*alpha
# Import the excl. NLO by specifying the observable
# for the real corrections
NLOexcl = mergefks(
sigma('m2enngR', obs='e'),
sigma('m2enngC'),
anyxi=sigma('m2enngV')
) * GF**2*lifetime*alpha**2
fullNLOexcl = LO + NLOexcl
NLOincl = mergefks(
sigma('m2enngR', obs='i'),
sigma('m2enngC'),
anyxi=sigma('m2enngV')
) * GF**2*lifetime*alpha**2
fullNLOincl = LO + NLOincl
print("BR_0 = ", printnumber(LO.value))
print("BRexcl = ", printnumber(fullNLOexcl.value))
print("BRincl = ", printnumber(fullNLOincl.value))
fig3, (ax1, ax2) = kplot(
{
'lo': scaleplot(LO['Ee'], 1e-3),
'nlo': scaleplot(NLOexcl['Ee'], 1e-3),
'nlo2': scaleplot(NLOincl['Ee'], 1e-3)
},
labelx=r"$E_e\,/\,{\rm GeV}$",
labelsigma=r"$\D\mathcal{B}/\D E_{e}$",
legend={
'lo': '$\\rm LO$',
'nlo': '$\\rm NLO\ exclusive$',
'nlo2': '$\\rm NLO\ inclusive$'
},
legendopts={'what': 'l', 'loc': 'lower left'}
)
ax2.set_ylim(-0.12,0.02)