
Overall completeness: 0%

Drell-Yan analysis Procedure

This twiki documents the most important steps of the Drell-Yan cross section measurement. It is intended to familiarize you with the technical aspects of the analysis procedure. 

Step 1: Producing ntuples

Completeness: 95%

  • Samples
  • The CMSSW_53X MC samples are used for the 8 TeV analysis. Below is the list of starting GEN-SIM-RECO samples used in the muon and electron analyses:

Sample | Generator | PDF / tune

DYToMuMu_M-10To20 | Powheg-Pythia6 | CT10 TuneZ2star
DYToMuMu_M-20 | Powheg-Pythia6 | CT10 TuneZ2star
DYToMuMu_M-200 | Powheg-Pythia6 | TuneZ2star
DYToMuMu_M-400 | Powheg-Pythia6 | TuneZ2star
DYToMuMu_M-500 | Powheg-Pythia6 | TuneZ2star
DYToMuMu_M-700 | Powheg-Pythia6 | TuneZ2star
DYToMuMu_M-800 | Powheg-Pythia6 | TuneZ2star
DYToMuMu_M-1000 | Powheg-Pythia6 | TuneZ2star
DYToMuMu_M-1500 | Powheg-Pythia6 | TuneZ2star
DYToMuMu_M-2000 | Powheg-Pythia6 | TuneZ2star
DYToEE_M-10To20 | Powheg-Pythia6 | CT10 TuneZ2star
DYToEE_M-20 | Powheg-Pythia6 | CT10 TuneZ2star
DYToEE_M-200 | Powheg-Pythia6 | TuneZ2star
DYToEE_M-400 | Powheg-Pythia6 | TuneZ2star
DYToEE_M-500 | Powheg-Pythia6 | TuneZ2star
DYToEE_M-700 | Powheg-Pythia6 | TuneZ2star
DYToEE_M-800 | Powheg-Pythia6 | TuneZ2star
DYToEE_M-1000 | Powheg-Pythia6 | TuneZ2star
DYToEE_M-1500 | Powheg-Pythia6 | TuneZ2star
DYToEE_M-2000 | Powheg-Pythia6 | TuneZ2star
DYToTauTau_M-10To20 | Powheg-Pythia6-tauola | TuneZ2star
DYToTauTau_M-20 | Powheg-Pythia6-tauola | CT10 TuneZ2star
WJetsToLNu | madgraph-tarball | TuneZ2star
WWJetsTo2L2Nu | madgraph-tauola | TuneZ2star
WZJetsTo2L2Q | madgraph-tauola | TuneZ2star
WZJetsTo3LNu | madgraph-tauola | TuneZ2star
ZZJetsTo2L2Nu | madgraph-tauola | TuneZ2star
ZZJetsTo2L2Q | madgraph-tauola | TuneZ2star
ZZJetsTo4L | madgraph-tauola | TuneZ2star
TT_Mtt-700to1000 | Powheg-tauola | TuneZ2star
TT_Mtt-1000toInf | Powheg-tauola | TuneZ2star
TTJetsFullLeptMGDecays | madgraph | TuneZ2star
TT | Powheg-tauola | TuneZ2star
TW | Powheg-tauola | TuneZ2star
TbarW | Powheg-tauola | TuneZ2star
QCD_Pt-15to20_MuPt5Enriched | Pythia6 | TuneZ2star
QCD_Pt-20to30_MuPt5Enriched | Pythia6 | TuneZ2star
QCD_Pt-30to50_MuPt5Enriched | Pythia6 | TuneZ2star
QCD_Pt-50to80_MuPt5Enriched | Pythia6 | TuneZ2star
QCD_Pt-80to120_MuPt5Enriched | Pythia6 | TuneZ2star
QCD_Pt-120to150_MuPt5Enriched | Pythia6 | TuneZ2star
QCD_Pt-150_MuPt5Enriched | Pythia6 | TuneZ2star

All MC samples are from the 53X generation campaign.

  • Data:
  • We use the DoubleMu(Parked), DoubleElectron, SingleMu, MuEG and (Single)Photon Primary Datasets (PDs), 22Jan2013 ReReco version:

    /DoubleMu/Run2012A-22Jan2013-v1/AOD : 190645-193621

    /DoubleElectron/Run2012A-22Jan2013-v1/AOD

    /DoubleMuParked/Run2012B-22Jan2013-v1/AOD : 193834-196531

    /DoubleElectron/Run2012B-22Jan2013-v1/AOD

    /DoubleMuParked/Run2012C-22Jan2013-v1/AOD : 198049-203742

    /DoubleElectron/Run2012C-22Jan2013-v1/AOD

    /DoubleMuParked/Run2012D-22Jan2013-v1/AOD : 203777-208686

    /DoubleElectron/Run2012D-22Jan2013-v1/AOD

    /SingleMu/Run2012A-22Jan2013-v1/AOD : 190645-193621

    /SingleMu/Run2012B-22Jan2013-v1/AOD : 193834-196531

    /SingleMu/Run2012C-22Jan2013-v1/AOD : 198049-203742

    /SingleMu/Run2012D-22Jan2013-v1/AOD : 203777-208686

    /MuEG/Run2012A-22Jan2013-v1/AOD

    /MuEG/Run2012B-22Jan2013-v1/AOD

    /MuEG/Run2012C-22Jan2013-v1/AOD

    /MuEG/Run2012D-22Jan2013-v1/AOD

    /Photon/Run2012A-22Jan2013-v1/AOD

    /SinglePhoton/Run2012B-22Jan2013-v1/AOD

    /SinglePhoton/Run2012C-22Jan2013-v1/AOD

    /SinglePhotonParked/Run2012D-22Jan2013-v1/AOD

  • JSON: Cert_190456-208686_8TeV_22Jan2013ReReco_Collisions12_JSON.txt (22Jan2013 ReReco)
  • The double-muon and double-electron samples are used for the main analysis, the single-muon samples for the efficiency-correction steps, and the remaining samples for background estimation.
  • Relevant software: CMSSW_5_3_3_patch2
    • Use the latest global tags for MC and data for the given release, as documented here
    • The DY analysis packages (Purdue and MIT) are used:
cmsrel CMSSW_5_3_3_patch2
cd CMSSW_5_3_3_patch2/src
cmsenv
git cms-addpkg DataFormats/PatCandidates
git cms-addpkg PhysicsTools/PatAlgos
git cms-addpkg PhysicsTools/PatUtils
git clone git@github.com:ASvyatkovskiy/DYAnalysis DimuonAnalysis/DYPackage
scram b -j8
export DYWorkDir=$PWD/DimuonAnalysis/DYPackage
cd $DYWorkDir/ntuples

To perform a local test of the ntuple-maker, run:

cmsRun ntuple_cfg.py

To produce the ntuples over the full dataset, use CRAB:

crab -create -submit -cfg crab.cfg
crab -get all -c <crab_0_datetime>

Step 3: Event Selection

Completeness: 60%

Once the ntuples are ready, one can proceed to the actual physics analysis. The first step of the analysis is the event selection. Currently, we use the so-called cut-based approach to discriminate between signal and background. For more on event selection read chapter 5 in the analysis note CMS-AN-13-420. Before starting to run a macro, set up the working area. Find all the necessary scripts in:

cd $DYWorkDir/test/ControlPlots

The code for event selection consists of three main files (plus a few auxiliary ones). First is the TSelector class, which is customized for the event selection used in a given analysis; the necessary weights (pileup, FEWZ and momentum scale corrections) are applied in this macro, and the Monte Carlo weights for each MC sample are hardcoded inside it. Next is the wrapper ROOT macro, which calls the TSelector to run on a given dataset. This wrapper is shown below and explained step by step:

//The macro takes 3 arguments, passed from the python script: the histogram name (invariant mass or, for instance, rapidity), the weight choice (ntuple or custom; this option is deprecated - we always use the custom weight), and the type of momentum scale correction (also deprecated - the correction does not depend on the run range in the 8 TeV analysis)
void analyseYield(const char* WHICHHIST, const char* NTUPLEWEIGHT, const char* MOMCORRTYPE) {

  // Depending on the directory with data, the protocol used to access data will be different: "file" or "xrootd" are the most commonly used.
  TString protocol = "file://";
  //TString protocol = "root://xrootd.rcac.purdue.edu/";


  //Pointer to the location of the data used. Can be on /mnt/hadoop or on the scratch
  TString dirname = "/mnt/hadoop/store/group/ewk/DY2013/";

  // Next, the TFileCollection is created. This section is specific for each dataset: data or MC, so we prepare this wrapper macro for each sample
  TFileCollection* c1 = new TFileCollection("data","data");
  //The splitting by runs/eras happens here: switch among RunAB, RunC1, RunC2, RunD1 and RunD2. This is handy for studies of run dependencies.
  //Note: C strings must be compared via TString (or strcmp), not with ==, which would only compare pointers.
  if (TString(MOMCORRTYPE) == "RunAB") c1->Add(protocol+dirname+"Data_RunAJan2013_Oct"+"/*.root");
  if (TString(MOMCORRTYPE) == "RunAB") c1->Add(protocol+dirname+"Data_RunBJan2013_Oct_p1"+"/*.root");
  if (TString(MOMCORRTYPE) == "RunAB") c1->Add(protocol+dirname+"Data_RunBJan2013_Oct_p2"+"/*.root");
  if (TString(MOMCORRTYPE) == "RunC1") c1->Add(protocol+dirname+"Data_RunCJan2013_Oct_p1"+"/*.root");
  if (TString(MOMCORRTYPE) == "RunC2") c1->Add(protocol+dirname+"Data_RunCJan2013_Oct_p2"+"/*.root");
  if (TString(MOMCORRTYPE) == "RunD1") c1->Add(protocol+dirname+"Data_RunDJan2013_Oct_p1"+"/*.root");
  if (TString(MOMCORRTYPE) == "RunD2") c1->Add(protocol+dirname+"Data_RunDJan2013_Oct_p2"+"/*.root");

  //Set the location of ProofLite Sandbox. It is more convenient to use the custom path rather than $HOME/.proof
  gEnv->SetValue("ProofLite.Sandbox", "<path to your working dir>/test/ControlPlots/proofbox/");
  
  //Splitting criteria: how many PROOF workers to use for the run. Using many more than 10-15 workers usually causes instability and eventually leads to a crash.
  TProof* p = TProof::Open("workers=20"); 
  p->RegisterDataSet("DATA", c1,"OV");
  p->ShowDataSets();
 
  //Deprecated - just leave as is, always
  TObjString* useNtupleWeightFlag = new TObjString(NTUPLEWEIGHT);
  p->AddInput(new TNamed("useNtupleWeightFlag",NTUPLEWEIGHT));

  //The histogram should always be "invm" - it produces both the 1D and 2D histograms. If one needs to study the N-1 selection, the string should instead be the name of the cut to exclude
  TObjString* histogramThis = new TObjString(WHICHHIST);
  p->AddInput(new TNamed("histogramThis",WHICHHIST));
  //This is now useless, but for later studies it might become useful again, if there is a run dependency for the momentum scale correction
  TObjString* momCorrType = new TObjString(MOMCORRTYPE);
  p->AddInput(new TNamed("momCorrType",MOMCORRTYPE));

  gROOT->Time();
  p->SetParameter("PROOF_LookupOpt", "all");
  //This invokes the TSelector: "recoTree/DiMuonTree" is the name of the ROOT tree inside the file, "EventSelector_CP.C" is the name of the TSelector
  p->Process("DATA#/recoTree/DiMuonTree","EventSelector_CP.C+");
}
  • There is one extra level here -  the python script. It calls the above ROOT wrapper macro and typically looks like this:
#!/usr/bin/env python
from subprocess import Popen

#This is normally just 'invm', but for the N-1 control plots (figures 18-25 in AN-13-420) it should be set to the name of the cut to exclude, e.g. one of: 'relPFisoNoEGamma', 'chi2dof', 'trackerHits', 'pixelHits', 'CosAngle', 'muonHits', 'nMatches', 'dxyBS', 'vtxTrkProb', 'trigMatches', 'pT', 'eta'
histos = ['invm'] 
 
#Normally one runs over all of them. Splitting into sets of runs is useful because loading a very large number of files into one session can cause instability
eras = ['RunAB','RunC1','RunC2','RunD1','RunD2'] 
#Simply invoke ROOT wrapper macro using Popen
for run in eras:
    for hist in histos:
        Popen('root -b -l -q \'analyseYield.C(\"'+hist+'\",\"False\",\"'+run+'\")\'',shell=True).wait()

Once this is understood, one can run the macros. To produce plots like 35-37, use the analyseYield_*.py scripts, which call the TSelector wrapper for the DY analysis (as described above):

mkdir runfolder
python analyseYield_mc.py
python analyseYield_data.py

Important information about the reweightings. The pileup reweighting is read from the ntuple, directly from a branch, on a per-event basis. The FEWZ weights are extracted from the theoretical calculation and are provided as arrays inside the efficiencyWeightToBin2012.C file located in the same directory (or any other directory, as long as there is an appropriate include in the header of the TSelector). The FEWZ weight is looked up based on the GEN mass, for signal MC only, as follows:

//look up FEWZ weight

FEWZ_WEIGHT = weight(genDiMuPt, fabs(genRapidity), genMass, true);

Finally, the Rochester momentum scale correction recipe is described here: http://www-cdf.fnal.gov/~jyhan/cms_momscl/cms_rochcor_manual.html

A few words about the normalization. The data events are not renormalized. The MC events are weighted according to the probability of each event to be observed in a real collision and according to the number of events generated in the sample. Therefore:

Event_weight ~ (Cross section x filter efficiency)/(Number of generated events)
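
As an illustration, a minimal sketch of this per-sample weight (the cross section, filter efficiency and event count are hypothetical placeholders, not the values hardcoded in the TSelector):

# Per-sample MC weight following the formula above.  Multiplying by the
# integrated luminosity would give an absolute prediction, but the overall
# scale is fixed later by the Z-peak normalization.  Numbers are hypothetical.
def mc_event_weight(xsec_pb, filter_eff, n_processed):
    return xsec_pb * filter_eff / float(n_processed)

print(mc_event_weight(5.4, 1.0, 3000000))   # e.g. 5.4 pb, no filter, 3M events processed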

For better accuracy we use the number of events actually run over, rather than the number generated; it is computed in the event loop and applied in the EventSelector::Terminate() method. In both the 7 and 8 TeV analyses, we normalize the MC stack (signal and backgrounds) to the number of events in data in the Z-peak region (before the efficiency corrections). A special post-processing macro takes care of this:

python postprocessor.py
cp runfolder/stack* ../Inputs/rawYield

This python script adds up the individual ROOT files with hadd and invokes the ROOT macros parser.C and parser_2D.C, which contain a method for normalizing the MC stack to data in the Z-peak region.
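
For illustration, a PyROOT sketch of this Z-peak normalization; the file and histogram names are assumptions, not necessarily those used by parser.C:

# Sketch (PyROOT) of normalizing the MC stack to data in the Z-peak window.
# File and histogram names are hypothetical.
import ROOT

f = ROOT.TFile("stack_invm.root")        # hypothetical hadd-ed output
h_data = f.Get("hdata")                  # data invariant-mass histogram
h_mc   = f.Get("hmc_stack_sum")          # sum of all MC components

lo = h_data.FindBin(60.0)                # Z-peak window, roughly 60-120 GeV
hi = h_data.FindBin(120.0) - 1
scale = h_data.Integral(lo, hi) / h_mc.Integral(lo, hi)
h_mc.Scale(scale)                        # the same factor is applied to every MC component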

After that, switch to the Dielectron working directory and produce necessary yield histograms before continuing with the style plotting.

XX

 

 

After that, the style macro is used to produce the publication-quality plots.

cd ../style/DY
root -l plot.C

This plots the 1D yield distributions (the switch between electrons and muons is done manually inside the macro by adjusting the paths).

To plot the 2D distributions do:

root -l ControlPlots_2D.C

Step 4: Acceptance and Efficiency estimation

Completeness: 100%

Another ingredient of the cross-section measurement is the acceptance times efficiency.

  • Acceptance is determined using GEN level information

To produce the acceptance and efficiency, one needs to change to a different folder and run a different TSelector, but the general flow (TSelector -> ROOT wrapper -> python wrapper) is almost the same:

cd $DYWorkDir/AccEffMCtruth
python analyseMCtruth.py

The script produces a ROOT file with histograms of the mass and rapidity spectra after the acceptance cuts, the selection cuts, or both. These are then used to calculate the acceptances, efficiencies and acceptance-efficiency products, with and without pileup and FEWZ reweighting, by executing:

root -l plotMCtruth.C
root -l plotMCtruth_2D.C
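
For reference, a hedged PyROOT sketch of the definitions behind these quantities; the file and histogram names are assumptions based on the description above:

# Acceptance, efficiency and their product from the MC-truth spectra:
#   A     = N(GEN inside acceptance)             / N(all GEN)
#   eff   = N(pass selection, inside acceptance) / N(GEN inside acceptance)
#   A*eff = N(pass selection, inside acceptance) / N(all GEN)
import ROOT

f     = ROOT.TFile("mctruth_spectra.root")   # hypothetical output of analyseMCtruth.py
h_all = f.Get("hmass_all")                   # all generated events
h_acc = f.Get("hmass_acc")                   # after acceptance cuts
h_sel = f.Get("hmass_acc_sel")               # after acceptance + selection cuts

h_A = h_acc.Clone("acceptance")
h_A.Divide(h_acc, h_all, 1.0, 1.0, "B")      # "B" = binomial errors
h_eff = h_sel.Clone("efficiency")
h_eff.Divide(h_sel, h_acc, 1.0, 1.0, "B")
h_AxE = h_sel.Clone("acc_times_eff")
h_AxE.Divide(h_sel, h_all, 1.0, 1.0, "B")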

To get the corresponding distributions in the electron channel, change to XX

 

 

The macros output ROOT files starting with out1* or out2*, containing the histograms for the acceptance, the efficiency and their product. To produce publication-level plots, the style macro described in the previous section is used again:

cd ../style/DY
root -l plot.C

To get the 2D plots do:

root -l plot_acc_2D.C

Step 5: Data-driven efficiency correction

Completeness: 15%

This step applies to the muon channel only: the electron efficiency scale factors are obtained from the EGamma group and are not re-measured independently.

Next, the data-driven efficiency corrections are applied. This is done using the standard CMSSW recipe, so several additional packages need to be checked out. Follow this twiki to set up your working area for the ntuple production: https://twiki.cern.ch/twiki/bin/viewauth/CMS/MuonTagAndProbe (alternatively, one can use the trees already produced!)

  • The procedure goes in two steps. The first is T&P tree production; it is rerun seldom (ideally once), since it depends only on the definitions of the tag and the probe:
cd TagAndProbe
cmsRun tp_from_aod_Data_newofficial.py
  • If you have not produced the T&P trees, you can always use the ready-made ones located here:

/store/user/asvyatko/DYstudy/TagProbeTrees/

  • The second is the fitting: separate jobs for the trigger and for all the muon-ID related efficiencies; these are rerun frequently, usually interactively (changing binning, definitions):
cmsRun fitMuonID_data_all_2011.py
  • All the latest macros/configs can be found here: UserCode/ASvyatkovskiy/TagAndProbe
  • Isolation: random cone - the code is currently private and cannot be used.

After familiarizing yourself with the TagAndProbe package, you need to produce the muon efficiencies as a function of pT and eta. These are not needed in the analysis itself, but rather to check that everything you are doing is correct. After that, produce the 2D pT-eta efficiency map (it is already produced in one go when running fitMuonID.py). To do that, use these simple ROOT macros (adjust the i/o paths; not user friendly yet!):

root -l idDataMC.C
root -l triggerMuonDataMC.C

And to produce 2D efficiency maps and correction factors do:

 root -l perBinTable.C

The final step here is to produce the efficiency as a function of invariant mass and the efficiency correction factor as a function of invariant mass.
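
To illustrate how such a correction enters the analysis, here is a hedged PyROOT sketch of weighting an MC dimuon event by the per-muon data/MC scale factors from the 2D pT-eta map (the file, histogram and function names are assumptions):

# Apply the data/MC efficiency correction (rho = eff_data/eff_MC) per muon.
# File and histogram names are hypothetical.
import ROOT

f_map = ROOT.TFile("effmap_2D.root")     # hypothetical output of perBinTable.C
h_rho = f_map.Get("rho_pt_eta")          # TH2 of eff(data)/eff(MC) in (pT, eta)

def event_sf(mu1, mu2):
    """Per-event correction: product of the per-muon data/MC scale factors."""
    sf = 1.0
    for pt, eta in (mu1, mu2):
        sf *= h_rho.GetBinContent(h_rho.FindBin(pt, eta))
    return sf

# e.g. weight an MC event with muons at (35 GeV, 0.4) and (28 GeV, -1.7)
w = event_sf((35.0, 0.4), (28.0, -1.7))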

Step 6: Background estimation

QCD data driven background estimation

In the 8 TeV analysis, the main method to estimate the QCD background in the dimuon channel is the ABCD method (the fake-rate method is used in the electron channel). Before starting, let me summarize the ABCD method in a nutshell:

ABCD method

1) Choose two variables that are assumed to be independent.

2) If there is no correlation, the ratio of yields in the corresponding regions is the same: N_A / N_B = N_C / N_D.

3) In our study the two variables are the sign of the muon pair and the muon isolation.

4) The QCD fraction in each region has some dependence, so we produce a correction factor for each of the regions B, C and D.

5) Measure N_B, N_C and N_D in the data sample and estimate N_A from them at the end (applying the correction factors).
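
A minimal sketch of the final estimate (step 5), with hypothetical yields and correction factors:

# ABCD estimate of the QCD yield in the signal region A.  From
# N_A / N_B = N_C / N_D it follows that N_A = N_B * N_C / N_D; the c_* factors
# are the per-region corrections from step 4.  All numbers are hypothetical.
def abcd_estimate(n_B, n_C, n_D, c_B=1.0, c_C=1.0, c_D=1.0):
    return (c_B * n_B) * (c_C * n_C) / (c_D * n_D)

print(abcd_estimate(n_B=1250., n_C=830., n_D=410., c_B=0.95, c_C=1.02, c_D=0.98))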

Now, let's go step by step.

First, change to the ABCD folder:

cd $DYWorkDir/ABCDmethod

The procedure consists of a few steps and is guided by the wrapper.py script located inside the folder:

Popen("python QCDFrac_p1.py",shell=True).wait()
Popen("python qcdFracHadder.py rootfiles",shell=True).wait()
Popen("python ABCD2vari_init.py",shell=True).wait()
Popen("python ABCD2vari_p1.py",shell=True).wait()

Thus, for each MC sample and for the real data a sequence of steps is run. First, the QCDFrac_*.py scripts invoke the EventSelector_Bkg.C TSelector class for various values of charge and isolation (the variables defining the signal and background regions); from the histograms filled, the coefficients are calculated. Second, the qcdFracHadder.py script is run on the output of the first step; it is a utility script that repacks the histograms into an appropriate format. Third, the ABCD2vari_init.py script performs the actual estimation of the ABCD coefficients in each region. Finally, the ABCD2vari_*.py scripts invoke the EventSelector_Bkg2.C TSelector class, passing the ABCD coefficients as TObjString objects into the macro.

The post-processing and the output harvesting step is performed by the following python script: 

python abcdPostprocessor.py

It uses the output of the second TSelector as input, hadds it, and produces a ROOT file with the histogram which is then used in the analysis.

E-mu data-driven background estimation method

To estimate all the non-QCD backgrounds we employ the so-called e-mu data driven background estimation method. The same method is applied in the muon and electron channels. The code used for that purpose was originally adapted from Manny and it uses the so-called Bambu workflow. First, let's change into the e-mu working directory:

cd $DYWorkDir/EmuBackground

First, reduced ntuples are generated from the original Bambu ntuples:

root -l    # the shared libraries are compiled if they have not been already
root [0] .L selectEmuEvents.C+
root [1] selectEmuEvents("../config_files/data_emu.conf")
root [2] .q

One will have to edit data_emu.conf to point to the local ntuples before running. After this step, the reduced ntuples should be written to a directory (../root_files/selected_events/DY/ntuples/EMU/). One also needs to run selectEvents.C to generate the reduced electron ntuples. These ntuples must contain two branches: mass (the dilepton invariant mass) and weight. After this is done, the e-mu macro can be run:

#compile the code
gmake eMuBkgExe
#This should produce the binary eMuBkgExe. There are several ways to run it; see the options below
./eMuBkgExe #run emu method for 1D analysis and produce plots
./eMuBkgExe --doDMDY #run 2D analysis and produce plots
./eMuBkgExe --doDMDY --saveRootFile #same as above but output ROOT file with yield, statistical and systematic info as true2eBkgDataPoints.root
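
For orientation, the basic relation such e-mu methods rely on (the actual implementation is in eMuBkgExe): flavour-symmetric backgrounds like ttbar, WW and DY->tautau populate the ee, e-mu and mu-mu final states roughly in proportion eff_e^2 : 2*eff_e*eff_mu : eff_mu^2, so the same-flavour background can be predicted from the observed e-mu yield. A sketch with hypothetical numbers:

# Predict the same-flavour background from the observed e-mu yield.
# eff_same is the efficiency for the lepton flavour of the target channel,
# eff_other that for the other flavour.  Numbers are hypothetical.
def same_flavour_bkg(n_emu, eff_same, eff_other):
    return 0.5 * n_emu * (eff_same / eff_other)

n_mumu_bkg = same_flavour_bkg(n_emu=2400., eff_same=0.92, eff_other=0.85)  # mumu channel
n_ee_bkg   = same_flavour_bkg(n_emu=2400., eff_same=0.85, eff_other=0.92)  # ee channel
print(n_mumu_bkg, n_ee_bkg)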

After this step, to produce the final ROOT file with histograms, one can run the following scripts:

root -l calculateEMu.C
root -l calculateEMu_2D.C

Step 7: Unfolding

Unfolding is applied to correct for the migration of entries between bins caused by mass resolution effects (the FSR correction is taken into account as a separate step). For the Drell-Yan analysis, the unfolding method of choice is matrix inversion. It provides a common interface between the channels, for symmetry and for ease of combination and systematic studies.

To do any unfolding with MC, three things are required (a minimal sketch follows the list):

  • Producing the response matrix
  • Making the histogram of measured events
  • Making the true histogram (clearly not used/available when unfolding data)
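
A minimal numpy sketch of matrix-inversion unfolding with these three ingredients (toy numbers, not the analysis binning):

# The response matrix R maps true bins to measured bins,
# R[i][j] = P(measured in bin i | true in bin j), so measured = R . true
# and the unfolded spectrum is true_est = R^-1 . measured.
import numpy as np

R = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.8, 0.2],
              [0.0, 0.1, 0.8]])
true     = np.array([1000., 500., 200.])   # "true" histogram (closure test only)
measured = R.dot(true)                     # what would be measured

unfolded = np.linalg.solve(R, measured)    # matrix inversion (solve is more stable)
print(unfolded)                            # recovers [1000., 500., 200.]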

First, one can do a warm-up exercise using a script that demonstrates how the unfolding/forward-folding object works.

cvs co UserCode/kypreos/drellYan2010/unfolding
cd UserCode/kypreos/drellYan2010/unfolding/
source setup.sh
root -l test/testExpo.C++

To get back the pulls:

root -l test/testPulls.C++

The macros in the note are produced with the following:

cvs co UserCode/Purdue/DYAnalysis/Unfolding

1. To produce the response matrix:

root -l unfoldingObs.C

2. To produce the unfolded yield plot, do:

root -l yield.C

Checkpoint: with these macros one should be able to reproduce plots 49-50 from the note and Tables 17-18 (note that Table 18 uses the background yield results from the background section).

Step 8: FSR correction

The effect of FSR manifests itself as photon emission off the final-state muons. It changes the dimuon invariant mass, so the dimuon mass is distinct from the propagator (Z/gamma*) mass.

For our analysis we estimate the effect of FSR and the corresponding correction bin-by-bin in invariant mass. This is done by comparing the pre-FSR and post-FSR spectra: the pre-FSR spectrum is obtained by requiring the mother of the muon to be the Z/gamma*, while for the post-FSR spectrum the mother can be anything. The corresponding plots in the note are 52-55; they can all be calculated from the information available in the ntuple using:

cd $CONTROL_PLOTS_DIR
root -l InvMassFSR.C++

To get the FSR histograms, one needs to turn the calculateFSR flag on.
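
For illustration, a hedged PyROOT sketch of the bin-by-bin correction described above; the file and histogram names are assumptions, not those used in InvMassFSR.C:

# Bin-by-bin FSR correction: ratio of the pre-FSR to the post-FSR GEN spectrum.
# File and histogram names are hypothetical.
import ROOT

f = ROOT.TFile("fsr_spectra.root")
h_pre  = f.Get("hmass_preFSR")            # mother required to be Z/gamma*
h_post = f.Get("hmass_postFSR")           # after photon emission (any mother)

h_corr = h_pre.Clone("fsr_correction")
h_corr.Divide(h_pre, h_post)              # per-bin factor N_preFSR / N_postFSR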

Checkpoint: this macro will allow one to get plots 52-55 from the note

Step 9: Systematic uncertainty estimation

There are various sources of systematics affecting our analysis: the PDF, theoretical modeling uncertainty, efficiency estimation uncertainty, background estimation, unfolding etc.

For the background estimation with the data-driven method, we estimate the systematic uncertainty as the difference, per mass bin, between the result obtained with the method and that expected from MC. The corresponding numbers are obtained with the emu_prediction_plots.py macro (see the recipe in the Step 6 section).

PDF uncertainty estimation. The recipe for the currently used method (step by step):
Reweight the PDFs of the existing MC samples, as implemented in CMSSW. First, check out the necessary packages:

scramv1 p CMSSW CMSSW_4_2_3
cvs co -r CMSSW_4_2_3 ElectroWeakAnalysis/Utilities

Then replace the LHAPDF library, as described here, with the current up-to-date one:
/afs/cern.ch/cms/slc5_amd64_gcc434/external/lhapdf/5.8.5-cms2/share/lhapdf/PDFsets
or point directly to the above path in:
CMSSW_4_2_3/config/toolbox/slc5_amd64_gcc434/tools/available/lhapdffull.xml
Then rebuild:

touch $CMSSW_BASE/src/ElectroWeakAnalysis/Utilities/BuildFile.xml
cmsenv
scramv1 b
cd ElectroWeakAnalysis/Utilities/test

Then change the input file in PdfSystematicsAnalyzer.py and run:

cmsRun PdfSystematicsAnalyzer.py

With the up-to-date LHAPDF, one can use CT10, MSTW2008*, CTEQ66, NNPDF2.0, and other PDF sets.
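
The per-member yields can then be combined offline. For orientation, a sketch of the asymmetric Hessian master formula commonly used for eigenvector sets such as CT10 (the numbers below are hypothetical):

# Asymmetric Hessian PDF uncertainty from pairs of eigenvector variations.
# members = list of (x_plus, x_minus) yields; x0 is the central-set yield.
import math

def hessian_uncertainty(x0, members):
    up = dn = 0.0
    for xp, xm in members:
        up += max(xp - x0, xm - x0, 0.0) ** 2
        dn += max(x0 - xp, x0 - xm, 0.0) ** 2
    return math.sqrt(up), math.sqrt(dn)

print(hessian_uncertainty(100.0, [(101.2, 99.1), (100.4, 99.7), (102.0, 98.5)]))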

Efficiency estimation uncertainty. The current method for efficiency estimation in the DY analysis is as follows: we estimate the MC-truth efficiency and then apply the efficiency correction map (pT-eta), extracted with the data-driven tag-and-probe method applied to data and MC, to weight the MC events. The systematic uncertainty associated with the tag-and-probe efficiency estimation is due to the line-shape modelling, the difference between fitting and counting, and the binning. The first two are calculated inside the macros described in Step 5. The binning systematic uncertainty is estimated using the following macro:

UserCode/Purdue/DYAnalysis/AnalysisMacros/Correction/correctionMass_systematics.C

It takes as input the ROOT files containing the histogram of the efficiency correction as a function of invariant mass, with two binnings (to estimate the binning uncertainty); the other sources of uncertainty are also assessed there.

Step 10: Plotting the results

The main result of the measurement is the cross-section ratio, or the r (and R) shape. We distinguish the R and r shapes (see chapter 9 of the note for the definitions, and also Figure 64). Figure 64 shows the shape R for theory and measurement (for two independent trigger scenarios). It relies on the theoretical cross-section prediction (1-2 GeV bins), the final numbers for the acceptance correction, and the final numbers for the cross-section measurement. To give a clearer feeling of what this plot depends on, these are the tables used to produce the numbers in Figure 64:

Tables 21-24: Theoretical predictions

Tables 25-26: Measurement

Tables 5-10: Acceptance-efficiency corrections

To run the code one simply needs:

cd $CMSSW_RELEASE_BASE/src
cvs co UserCode/Purdue/DYAnalysis/AnalysisMacros/GautierMacro
cp UserCode/Purdue/DYAnalysis/AnalysisMacros/GautierMacro/* $CONTROL_PLOTS_DIR
cd $CONTROL_PLOTS_DIR
root -l theory_plot.C++

Use the Gautier style macros to get the same plots with a different style:

root -l DY.C
root -l plot.C

To get all the up to date values for the shape r/R use:

cvs co UserCode/Purdue/DYAnalysis/AnalysisMacros/ShapeR
./shapeDY.make
./shapeDY

Among the style requirements for the presented results is to place the measurement point at the weighted position (i.e. the location of the point inside the bin is such that the integral over the sub-bins is equal on both sides). The following macro can be used to calculate these positions; in ROOT do:

.L compare_r.cc;
compare_r();
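
For illustration, a numpy sketch of this weighted position, i.e. the abscissa splitting the integral of the spectrum within the bin into equal halves; the falling spectrum used here is a hypothetical stand-in for the theory prediction used in compare_r.cc:

# Median ("weighted") position of a spectrum inside a bin [lo, hi].
import numpy as np

def weighted_position(spectrum, lo, hi, n=1000):
    x = np.linspace(lo, hi, n)
    cdf = np.cumsum(spectrum(x))
    cdf /= cdf[-1]
    return x[np.searchsorted(cdf, 0.5)]   # point with equal integral on both sides

# e.g. a steeply falling mass spectrum in the 200-400 GeV bin
print(weighted_position(lambda m: m ** -4.5, 200.0, 400.0))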

A lot of interesting information can be retrieved from the Zprime JTERM SHORT and LONG exercises (which are constructed along the same lines as this tutorial).
