Drell-Yan Analysis Procedure
This twiki documents the most important steps of the Drell-Yan cross section measurement. It is intended to familiarize you with the technical aspects of the analysis procedure.
The pdf file of AN-13-420 attached here (https://wiki.itap.purdue.edu/download/attachments/44925561/AN2013_420_v15.pdf?api=v2) contains the notes with the macro name used to produce each plot.
Step 1: Producing ntuples
- The CMSSW_53X MC samples are used for the 8 TeV analysis. Below is the list of starting GEN-SIM-RECO samples used in the muon and electron analyses:
We use the SingleMu and DoubleMu Primary Datasets (PD), 22Jan2013 ReReco version
- JSON: Cert_190456-208686_8TeV_22Jan2013ReReco_Collisions12_JSON.txt (22Jan2013 ReReco)
- The double-muon and double-electron samples are used for the main analysis; the single-muon samples are used for the efficiency-correction estimation steps. The other samples are used for background estimation.
- Relevant software: CMSSW_5_3_3_patch2
Note that an slc5 machine is necessary for proper compilation; the code might not compile out of the box on slc6 or in later CMSSW release versions (it would need to be ported first).
To perform a simple local test of the ntuple maker, run:
To produce the ntuples over the full dataset, use CRAB:
Step 2: Event Selection
First, you will have to get a custom rootlogon file. Note: some of the libraries loaded in this rootlogon might interfere with the PROOF environment on your machine.
Once the ntuples are ready, one can proceed to the actual physics analysis. The first step of the analysis is the event selection. Currently, we use the so-called cut-based approach to discriminate between signal and background. For more on event selection read chapter 5 in the analysis note CMS-AN-13-420. Before starting to run a macro, set up the working area. Find all the necessary scripts in:
The code for event selection consists of three main files (plus a few auxiliary ones). The first is the TSelector class, which is customized for the event selection of a given analysis; the necessary weights (pileup, FEWZ and momentum-scale corrections) are applied in this macro, and the Monte Carlo weights for each MC sample used are also hardcoded inside it. Next is the wrapper ROOT macro, which calls the TSelector to run on a given dataset. This wrapper is shown below and explained step by step:
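In case the original listing is not at hand, below is a minimal sketch of what such a ROOT wrapper typically looks like. The tree path, file names and option string are placeholders for illustration, not the actual values used in the analysis.

    // wrapper_sketch.C -- illustrative only; tree name, file names and the
    // option string are placeholders, not the actual analysis values
    #include <fstream>
    #include <string>
    #include "TChain.h"
    #include "TString.h"

    void wrapper_sketch(TString fileList = "DYJetsToLL_files.txt",
                        TString outFile  = "selected_DY.root")
    {
       // chain together all ntuple files belonging to one sample
       TChain* chain = new TChain("recoTree/DiMuonTree");   // hypothetical tree name
       std::ifstream in(fileList.Data());
       std::string fname;
       while (in >> fname) chain->Add(fname.c_str());

       // sample-specific settings are passed to the selector via the option string
       TString opts = "sample=DYJetsToLL outfile=" + outFile;

       // ROOT compiles the selector on the fly ("+") and loops over the chain;
       // Begin()/Process()/Terminate() of the TSelector do the actual work
       chain->Process("EventSelector.C+", opts);
    }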
- There is one extra level here: the python script. It calls the above ROOT wrapper macro and typically looks like this:
Once this is understood, one can run the macro. To produce plots such as Figures 35-37 of the analysis note, use the analyse.py script, which calls the TSelector wrapper for the DY analysis (as described above):
Important information about the reweightings: the pileup reweighting is read from the ntuple, directly from the corresponding branch on a per-event basis. The FEWZ weights are extracted from the theoretical calculation and are provided as arrays inside the efficiencyWeightToBin2012.C file located in the same directory (or any other directory, as long as there is an appropriate include in the header of the TSelector). The FEWZ weights are looked up based on the GEN mass as follows inside the code, for signal MC only:
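A minimal sketch of such a mass-binned lookup; the bin edges and weight values below are placeholders, the real weights are the arrays in efficiencyWeightToBin2012.C.

    // Illustrative FEWZ-weight lookup by generator-level invariant mass.
    // Bin edges and weight values are placeholders, not the real FEWZ numbers.
    const int    nMassBins = 3;
    const double massEdges[nMassBins + 1] = {15., 60., 120., 1500.};
    const double fewzWeight[nMassBins]    = {0.98, 1.00, 1.02};

    double getFEWZWeight(double genMass)
    {
       for (int i = 0; i < nMassBins; ++i)
          if (genMass >= massEdges[i] && genMass < massEdges[i + 1])
             return fewzWeight[i];
       return 1.0;                     // outside the tabulated range: no reweighting
    }

    // inside the event loop, for signal MC only:
    //    if (isSignalMC) weight *= getFEWZWeight(genMass);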
Finally, the Rochester momentum-scale correction recipe is described here: http://www-cdf.fnal.gov/~jyhan/cms_momscl/cms_rochcor_manual.html
A few words about the normalization. The data events are not renormalized. The MC events are weighted according to the probability of each event to be observed in a real collision and according to the number of events generated in the sample. Therefore, each MC event enters the histograms with a weight proportional to the sample cross section times the integrated luminosity, divided by the number of events in the sample (sketched below).
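Schematically (all numbers below are placeholders, not the values used in the analysis):

    // Sketch of the per-event MC normalization weight; the cross section,
    // luminosity and event count are placeholders.
    double crossSection  = 1915.0;     // pb, hypothetical sample cross section
    double lumi          = 19700.0;    // pb^-1, hypothetical integrated luminosity
    long long nEvents    = 30000000;   // events in the sample (see below)

    double mcWeight = crossSection * lumi / double(nEvents);
    // every histogram fill for this sample: hist->Fill(mass, mcWeight * otherWeights);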
For better accuracy we use the number of events actually run over, rather than the number generated. We calculate it in the event loop and apply it in the EventSelector::Terminate() method. In both the 7 and 8 TeV analyses, we normalized the MC stack (signal and backgrounds) to the number of events in data in the Z-peak region (before the efficiency corrections). A special post-processing macro takes care of this:
This python script adds up individual ROOT files with hadd and invokes the ROOT macros parser.C and parser_2D.C, which contain a method that normalizes the MC stack to data in the Z-peak region.
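A minimal sketch of this Z-peak normalization; the histogram names and the 60-120 GeV window are illustrative, the actual implementation is in parser.C / parser_2D.C.

    // Normalize the full MC stack (signal + backgrounds) to data in the Z peak.
    // Histogram names and the mass window are illustrative only.
    #include <vector>
    #include "TH1D.h"

    void normalizeToZPeak(TH1D* hData, std::vector<TH1D*>& mcHists)
    {
       int binLo = hData->FindBin(60.0);
       int binHi = hData->FindBin(120.0) - 1;      // stay below the 120 GeV edge

       double nData = hData->Integral(binLo, binHi);
       double nMC   = 0.0;
       for (size_t i = 0; i < mcHists.size(); ++i)
          nMC += mcHists[i]->Integral(binLo, binHi);

       double scale = nData / nMC;                 // one common factor for the whole stack
       for (size_t i = 0; i < mcHists.size(); ++i)
          mcHists[i]->Scale(scale);
    }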
After that, switch to the Dielectron working directory and produce the necessary yield histograms before continuing with the style plotting.
Inspect the wrapper_EE.sh file, set the do_selection flag to 1 (true), and check that the input files to run on are properly specified in the conf_file.
Then run in two steps: (1) produce reduced ntuples, (2) prepare binned yields for analysis
To switch between 1D and 2D cases open the ../Include/DYTools.hh file and change the flag to const int study2D=1;.
After that, the style macro is used to produce the publication-quality plots.
This plots the 1D yield distributions (the switch between electrons and muons is done manually inside the macro by adjusting the paths).
To plot the 2D distributions do:
Step 3: Acceptance and Efficiency estimation
Another constituent of the cross-section measurement is the acceptance-efficiency product.
- Acceptance is determined using GEN level information
To produce the acceptance and efficiency one needs to change to a different folder and run a different TSelector, but the general flow (TSelector -> ROOT wrapper -> python wrapper) is almost the same:
The script produces a root file with histograms of the mass and rapidity spectra after the acceptance cuts, the selection cuts, or both. These are then used to calculate the acceptances, efficiencies and acceptance-efficiency products, with and without pileup and FEWZ reweighting, by executing:
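Schematically, the acceptance, the efficiency and their product are formed as ratios of these histograms; the histogram names below are illustrative.

    // Sketch: acceptance, efficiency and their product from GEN-level and
    // selected-event histograms; names are illustrative.
    #include "TH1D.h"

    void makeAccEff(TH1D* hGenAll, TH1D* hGenInAcc, TH1D* hSelected)
    {
       TH1D* hAcc = (TH1D*) hGenInAcc->Clone("hAcc");
       hAcc->Divide(hGenInAcc, hGenAll, 1.0, 1.0, "B");    // A = N(in acceptance) / N(generated)

       TH1D* hEff = (TH1D*) hSelected->Clone("hEff");
       hEff->Divide(hSelected, hGenInAcc, 1.0, 1.0, "B");  // eff = N(selected) / N(in acceptance)

       TH1D* hAccEff = (TH1D*) hAcc->Clone("hAccEff");
       hAccEff->Multiply(hEff);                            // acceptance x efficiency
       // write or draw the three histograms as needed
    }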
To get the corresponding distributions in the electron channel, change to the FullChain folder:
The macro outputs a root file starting with out1* or out2* that contains the histograms of the acceptance, the efficiency and their product. To produce the publication-level plots, the style macro described in the previous section is used again.
To get the 2D plots do:
Step 4: Data-driven efficiency correction
This step is performed in the muon channel only; the electron efficiency scale factors are obtained from the EGamma group and are not re-measured independently.
Next, the data-driven efficiency corrections are applied. This is done using the standard CMSSW recipe, so a number of additional packages need to be checked out. Follow this twiki: https://twiki.cern.ch/twiki/bin/viewauth/CMS/MuonTagAndProbe to set up your working area for the ntuple production (alternatively, one can use the trees already produced!)
- The procedure goes in two steps. The first is T&P tree production: it is rerun seldom (ideally once), since it depends only on the definitions of the tag and the probe.
- If you have not produced T&P trees, you can always use the official ntuples, located as described in the MuonTagAndProbe twiki:
- The second step of the procedure is the fitting: separate jobs for the trigger and for all muon-ID-related efficiencies. This part is rerun frequently and usually interactively (changing binning, definitions).
After familiarizing yourself with the TagAndProbe package, you need to produce the muon efficiencies as a function of pT and eta. You can use the wrapper.py script specifying which variables to bin the efficiency in and what runs/MC samples to process.
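Below is a minimal sketch of one common way such data/MC efficiency ratios (scale factors) are applied, namely as per-event weights; the lookup function and its values are hypothetical, and the actual analysis may instead apply the corrections per bin.

    // Sketch of applying tag-and-probe scale factors as per-event weights.
    // getScaleFactor() is hypothetical; the real numbers come from the T&P fits
    // binned in pT and eta.
    double getScaleFactor(double pt, double eta)
    {
       // placeholder: a real implementation would look up (pt, eta) in the
       // efficiency tables for data and MC
       double effData = 0.95;
       double effMC   = 0.96;
       return effData / effMC;
    }

    // inside the event loop, one factor per selected muon:
    //    weight *= getScaleFactor(mu1_pt, mu1_eta) * getScaleFactor(mu2_pt, mu2_eta);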
Finally, produce the plots with
Step 5: Background estimation
QCD data driven background estimation
In the 8 TeV analysis, the main method to estimate the QCD background in the dimuon channel is the ABCD method (the fake-rate method is used in the electron channel). Before starting, let me summarize the ABCD method in a nutshell:
1) Choose two variables that are assumed to be independent.
2) If there is no correlation, the ratios of yields are the same: N_A / N_B = N_C / N_D.
3) In our study the two variables are the sign (charge product) of the muon pair and the muon isolation.
4) Each region is not purely QCD, so a QCD-fraction correction factor is produced for each of the regions B, C and D.
5) Measure N_B, N_C and N_D in data and estimate N_A from them at the end (applying the correction factors), as sketched below.
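Putting points 2, 4 and 5 together, the QCD estimate in the signal region A is, schematically (variable names are illustrative; the assignment of regions to the charge/isolation combinations is defined in the ABCD macros):

    // Sketch of the ABCD estimate.  A is the signal region; B, C, D are the
    // control regions obtained by flipping the charge and/or isolation
    // requirement.  cB, cC, cD stand for the QCD-fraction correction factors.
    double estimateQCDInA(double nB, double nC, double nD,
                          double cB = 1.0, double cC = 1.0, double cD = 1.0)
    {
       // N_A / N_B = N_C / N_D  =>  N_A = N_B * N_C / N_D (with corrections)
       return (cB * nB) * (cC * nC) / (cD * nD);
    }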
Now, let's go step by step.
First, change to the ABCD folder:
The procedure consists of a few steps and is guided by the wrapper.py script located inside the folder:
Thus, for each of the MC samples and for the real data a sequence of scripts is run. First, the QCDFrac_*.py scripts invoke the EventSelector_Bkg.C TSelector class for the various combinations of charge and isolation (the variables defining the signal and background regions); the coefficients are calculated from the histograms filled there. Second, the qcdFracHadder.py script is run on the output of the first step; it is a utility script that repacks the histograms into an appropriate format. Third, the ABCD2vari_init.py script performs the actual evaluation of the ABCD coefficients in each region. Finally, the ABCD2vari_*.py scripts invoke the EventSelector_Bkg2.C TSelector class, passing the ABCD coefficients into the macro as TObjString objects.
The post-processing and the output harvesting step is performed by the following python script:
It uses the output of the second TSelector as an input, hadds it and produces a root file with the histogram that is then used in the analysis.
E-mu data-driven background estimation method
To estimate all the non-QCD backgrounds we employ the so-called e-mu data driven background estimation method. The same method is applied in the muon and electron channels. The code used for that purpose was originally adapted from Manny and it uses the so-called Bambu workflow. First, let's change into the e-mu working directory:
First, reduced ntuples are generated from the original Bambu ntuples:
The above script needs to be run twice, in two modes: SS (same-sign pairs) and OS (opposite-sign pairs). The switch is done in the selectEmuEvents.C script by switching:
and also by changing the ntupDir name. One will have to edit data_emu.conf to point to the local ntuples before running.
After running this step, the reduced ntuples should be output to a directory (../root_files/selected_events/DY/ntuples/EMU/). One also needs to run selectEvents.C to generate the reduced electron ntuples. These ntuples must contain two branches, mass (the dilepton invariant mass) and weight. After this is done, the e-mu macro can be run:
This macro is also run in two regimes, using the SS and OS ntuples as input, and the corresponding true2eBackground files are produced and saved. The reason we need both the SS and OS cases is that they are used to estimate the QCD contribution missing from the e-mu spectrum. These true2eBackground files and the dilepton yields serve as the input to the final step of the e-mu background estimation, the production of a final root file with histograms:
As you can see, four different macros are rerun for electrons and muons, in 1D and 2D.
One other source of background considered in this analysis is the photon-induced (PI) background. This background is irreducible and is not estimated from MC. The bulk of the calculation is done with FEWZ3, by switching the photon-induced components on and off. Once the output files are ready, one can simply parse them and get the bin-by-bin correction:
The following scripts can be used to visualize and compare the PI background yields:
Once the correction is prepared in a root file, it is simply loaded in the shapeR plotting macro as discussed in the sections below.
Step 6: Unfolding
Unfolding is applied to correct for the migration of entries between bins caused by mass-resolution effects (the FSR correction is taken into account in a separate step, although it also uses the unfolding technique). In the 8 TeV analysis we use the iterative Bayesian unfolding technique, which provides a common interface between the channels, for symmetry and for ease of combination and systematic studies. Both the iterative Bayesian and the matrix-inversion technique (used at 7 TeV) are implemented and described below.
Unfolding with MC requires three ingredients (a schematic example follows the list):
- Producing the response matrix
- Making the histogram of measured events
- Making the true histogram & closure test
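The sketch below illustrates these ingredients using the RooUnfold package; this is an assumption for illustration only (the analysis has its own implementation in unfold.C), and the binning, histogram names and number of iterations are placeholders.

    // Illustration of iterative Bayesian unfolding with RooUnfold (an assumption;
    // the actual analysis code is in unfold.C).  Binning and iteration count are
    // placeholders.
    #include "TH1D.h"
    #include "RooUnfoldResponse.h"
    #include "RooUnfoldBayes.h"

    TH1D* unfoldSketch(TH1D* hMeasured)
    {
       // 1) response matrix, filled from MC with (reconstructed, generated) masses
       RooUnfoldResponse response(41, 15., 1500.);       // hypothetical mass binning
       // in the MC event loop:
       //    if (isReconstructed) response.Fill(recoMass, genMass, weight);
       //    else                 response.Miss(genMass, weight);

       // 2) unfold the measured (background-subtracted) spectrum
       RooUnfoldBayes unfold(&response, hMeasured, 4);   // 4 iterations: placeholder

       // 3) the unfolded spectrum, compared to the true histogram in a closure test
       return (TH1D*) unfold.Hreco()->Clone("hUnfolded");
    }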
First, change to the unfolding working directory (common for electrons and muons).
The main steps for unfolding procedure go as follows:
1. Produce the response matrix.
2. Produce the unfolded yield.
3. Visualize the yields and ratios of yields
The first step is rather time consuming, and is done by:
which takes care of the response matrix production for both the 1D and 2D cases. To visualize the resulting response matrices do
There is a switch inside this macro that allows changing between the 1D and 2D plots; it is documented with an inline comment. Once this is done, one can continue to apply the unfolding technique. Open the unfold.C file and familiarize yourself with the various flags (pre-processor pragmas, to be more precise), namely:
Above is the default setting, since the iterative Bayesian method is the default for 8 TeV in both channels. First run in closure-test mode by setting the run flag to 'POWHEG' inside the wrapper unfold.py, then switch it to '' (empty), which performs the actual unfolding. We make no distinction between run ranges at 8 TeV, since the scale corrections are run-independent.
Repeat all for 2D:
Then, to repeat for electrons, change to the FullChain folder and set the appropriate flags for running the response-matrix production step.
That summarizes the unfolding step; the output of this step will be used in the following analysis steps.
Step 7: FSR correction
The effect of FSR is photon emission off a final-state lepton. It changes the dilepton invariant mass, so that the measured dilepton mass differs from the propagator (Z/gamma*) mass.
For our analysis we estimate the effect of FSR and the corresponding correction by evaluating an unfolding correction in invariant-mass and rapidity bins. This is done by applying exactly the same unfolding procedure as for the mass-resolution effects described above. A minor difference is that we also apply bin-by-bin corrections for event classes that do not enter the response matrix.
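Schematically, such a bin-by-bin correction is just a ratio of MC histograms applied to the affected yields; the histogram names below are illustrative, and the actual factors are extracted with the fracEff.C script mentioned below.

    // Sketch: bin-by-bin FSR correction factor for event classes outside the
    // response matrix, as a ratio of MC pre-FSR to post-FSR spectra (illustrative).
    #include "TH1D.h"

    TH1D* makeBinByBinCorrection(TH1D* hPreFSR, TH1D* hPostFSR)
    {
       TH1D* hCorr = (TH1D*) hPreFSR->Clone("hBinByBinCorr");
       hCorr->Divide(hPreFSR, hPostFSR, 1.0, 1.0, "B");  // c_i = N_i(pre-FSR) / N_i(post-FSR)
       return hCorr;
       // applied bin by bin: correctedYield_i = yield_i * hCorr->GetBinContent(i)
    }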
Change to the FSRunfold directory
And run steps similar to the above:
This script will give you the response matrix in 1D and 2D and also the additional bin-by-bin corrections for events not entering the response matrix. In addition, there is an option to run fully bin-by-bin as a cross-check. If you inspect the contents of this python script, you will be able to understand what is actually done. After the jobs are complete, you need to merge the individual root files using hadd, and then run the fracEff.C script to extract the additional corrections:
Similarly to the detector resolution unfolding step, you can inspect the response matrix:
To get all these quantities in the electron channel, similarly to the detector-resolution unfolding case, you just need to:
Step 8: Cross section calculation
Once all the constituents of the cross section are in place, one can continue with the cross-section calculation. First, the results are calculated in each individual channel, and then they are combined using the BLUE method (as described in Step 9). To calculate the 1D cross section in the muon channel, change to:
This will produce an output root file in the ../Outputs directory. All the necessary input files are expected to be available in the ../Inputs directory. To get the 2D cross section change to
The output file is also going to be created in the ../Outputs directory. To get the electron cross section do as usual:
One has to run this step twice, switching the flag between 1D and 2D.
This produces the necessary root files with the histograms of the cross section and uncertainties.
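For orientation, in each mass bin the differential cross section is, schematically, the unfolded background-subtracted yield divided by the acceptance-efficiency product, the data/MC efficiency correction, the integrated luminosity and the bin width. A sketch with placeholder names and numbers:

    // Sketch of the per-bin differential cross section; histogram names and the
    // luminosity value are placeholders for the actual analysis inputs.
    #include "TH1D.h"

    void crossSectionSketch(TH1D* hYield, TH1D* hAccEff, TH1D* hRho, TH1D* hXsec)
    {
       double lumi = 19700.0;                               // pb^-1, placeholder
       for (int i = 1; i <= hYield->GetNbinsX(); ++i) {
          double n      = hYield->GetBinContent(i);         // unfolded, background-subtracted yield
          double accEff = hAccEff->GetBinContent(i);        // acceptance x efficiency
          double rho    = hRho->GetBinContent(i);           // data/MC efficiency correction
          double width  = hYield->GetBinWidth(i);           // GeV

          hXsec->SetBinContent(i, n / (accEff * rho * lumi * width));   // pb/GeV
       }
    }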
Step 9: Electron-muon combination with the BLUE method
Having the root files for the individual cross-section measurements in the dielectron and dimuon channels, we need to combine them for higher precision. The combination is performed with the BLUE method, which takes the two vectors of measured cross-section values and their covariance matrices.
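For a single bin measured in the two channels, the BLUE combination reduces to the weighted average sketched below; the actual resultCombiner.C macro handles the full bin-by-bin covariance matrices, and the inputs here are placeholders.

    // Sketch of the BLUE combination of one quantity measured in two channels.
    #include "TMatrixD.h"
    #include "TVectorD.h"

    double blueCombine(double xEE, double xMM, TMatrixD& cov /* 2x2 covariance */)
    {
       TMatrixD covInv(cov);
       covInv.Invert();

       TVectorD u(2);  u[0] = 1.0;  u[1] = 1.0;   // both channels measure the same quantity
       TVectorD x(2);  x[0] = xEE;  x[1] = xMM;

       TVectorD w = covInv * u;                   // un-normalized BLUE weights
       w *= 1.0 / (u * w);                        // normalize: weights sum to 1

       return w * x;                              // combined value = w^T x
    }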
First, we need to make sure that the inputs are in the form the BLUE macro expects (i.e. ASCII, not root):
We can use the txt2Plot.py macro to validate the txt input by visualizing it.
After we have the inputs in the proper format, we just need to run the resultCombiner.C macro. To pass all the inputs properly (they should be in the current folder), we specify them in the wrapper.py script and run it as
The output will again be in ASCII format, but we normally need it in root, so we have to run another converter after we finish:
After that, we have a root file with the cross-section histogram in the same format as for the individual cross sections, and we can visualize it (produce a plot for the publication) in the same way as the other cross sections in the previous section.
Step 10: Double ratio calculations
Once the cross sections have been obtained, the double ratios (ratios of the normalized differential and double-differential cross sections) can be calculated. Most of the macros for the double-ratio calculation are located in the ../ShapeR folder, so first change to that folder:
Produce the double ratios and uncertainties:
Step 11: Plotting the final results
The final results are the absolute cross sections in bins of mass and rapidity in the dielectron channel, the dimuon channel and their combination, as well as the double ratios. To plot the 1D differential cross sections do:
To plot the 1D double ratios (switch between the lepton channels is inside):
To plot the 2D cross sections and double ratios do:
To get all the up-to-date values for the shape r/R use: