//description to be added - I will ask Adam. But actually we switched to Purdue ntuples so this section is obsolete....
Second step: make the cross section maps.
The submission directory should not exist, otherwise you get an error.
Note: you obviously do not want to recalculate the weights and event counts for each stepwise DY sample every time. For Summer11 the numbers are already available here: /UserCode/Purdue/DYAnalysis/AcceptanceAdam/commands_XSEC.txt
But for Fall11 I will have to recalculate them and update the corresponding file.
Third step: make the weights
To make the weights one needs to run the following:
The weights referred to here are the FEWZ/POWHEG cross-section ratios per Pt-Y bin (see one of Adam's presentations). What the script does is essentially the following: it takes the POWHEG Pt-Y maps produced in the previous step and the FEWZ Pt-Y maps (produced by Alexey; the latest are usually found at https://twiki.cern.ch/twiki/bin/view/CMS/EWKDrellYan2011, in the file starting with map2D_*.root), and divides them bin by bin using the my_divide function (a Clopper-Pearson divide).
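The bin-by-bin division can be sketched as follows. This is not the actual my_divide (which assigns Clopper-Pearson style errors); it is a simplified illustration of the bookkeeping, using plain Gaussian error propagation and invented names:

```python
# Simplified sketch of a bin-by-bin ratio of two Pt-Y maps (FEWZ / POWHEG).
# NOTE: the real my_divide uses Clopper-Pearson errors; here we only
# propagate Gaussian errors to show the structure of the calculation.
def divide_maps(num, num_err, den, den_err):
    ratio, ratio_err = [], []
    for n, ne, d, de in zip(num, num_err, den, den_err):
        if d == 0:
            # empty POWHEG bin: the weight is undefined, store zero
            ratio.append(0.0)
            ratio_err.append(0.0)
            continue
        r = n / d
        ratio.append(r)
        if n == 0:
            ratio_err.append(0.0)
        else:
            # relative errors added in quadrature
            ratio_err.append(abs(r) * ((ne / n) ** 2 + (de / d) ** 2) ** 0.5)
    return ratio, ratio_err
```

In the real script the same loop runs over the 2-D Pt-Y histogram bins and the result is written out as weight histograms.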
Complications: there is a set of parameters and input files which are configured inside the script.
First of all POWHEG input files:
## ADAM for NNLO
# Get the files with POWHEG maps
# fine Pt binning
fileIn10fine = ROOT.TFile("DYM1020/xsec04w1_newFine/powheg_xs_full_04_2011.root")
# coarse Pt binning
fileIn10 = ROOT.TFile("DYM1020/xsec04w1_new/powheg_xs_full_04_2011.root")
# fine Pt binning
fileIn20fine = ROOT.TFile("DYM20/xsec04w1_newFine/powheg_xs_full_04_2011.root")
# coarse Pt binning
fileIn20 = ROOT.TFile("DYM20/xsec04w1_new/powheg_xs_full_04_2011.root")
fileIn200 = ROOT.TFile("DYM200/xsec04w1_new/powheg_xs_full_04_2011.root")
fileIn500 = ROOT.TFile("DYM500/xsec04w1_new/powheg_xs_full_04_2011.root")
fileIn1000 = ROOT.TFile("DYM1000/xsec04w1_new/powheg_xs_full_04_2011.root")
There are multiple input files because we use STEPWISE DY samples, split by generator mass. On the other hand, the Pt-Y binning was fixed in the previous step, so if we decide to play around with the binning we need to provide a different input file (compare fileIn10fine and fileIn10, for instance). The parameter finePtBins controls the number of leading mass bins that use the fine binning.
Secondly, the FEWZ input files: these are also hardcoded inside the script, because multiple versions exist (differing by Vegas integration precision and NNLO/NLO order). The parameter nnloInBins controls the number of leading mass bins for which we have NNLO.
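The role of these two parameters can be sketched like this. The function and the default value of 12 are assumptions for illustration (12 is suggested by the "fine12" tag in the output file name below), not the actual script code:

```python
def pick_inputs(mass_bin, finePtBins=12, nnloInBins=12):
    """Illustrative selection of input maps for a given mass bin index.

    The first `finePtBins` mass bins use the fine-pT POWHEG maps and
    the first `nnloInBins` mass bins use the NNLO FEWZ maps; later
    bins fall back to coarse pT binning and NLO, respectively.
    """
    powheg = "fine" if mass_bin < finePtBins else "coarse"
    fewz = "NNLO" if mass_bin < nnloInBins else "NLO"
    return powheg, fewz
```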
The output of the script is DYMoutput/weights_stepwise_precision10-5_fine12.root, a file containing histograms of the weights and their errors.
Fourth step: make the corrected acceptance distributions (finally!!!)
Inspect the countcorracc.py script in the directory. What it does is the following:
To test the script locally on a few hundred events, do:
python countcorracc.py --r DYMoutput/fewz_powheg_weights_stepwise_2011_fine12.root --o fewz_powheg_corracc_2011 --n 5.78216 --s 3320.0 --e 9630633.0 --l 15 --h 20 --sg DYM1020/a2_2011/AcceptanceFromPAT_2011_mapcfg-pat_9_1_WaI/zp2mu_histos.root
Note that, again, one has to run a separate job for each of the stepwise DY samples, and the weights file, event count and cross section passed as arguments have to be different for each sample.
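One way to keep the per-sample bookkeeping straight is to generate the test commands from a small table. Only the DYM1020 numbers below are taken from the example command above; the DYM20 line is a placeholder, and the simplified --sg path is illustrative:

```python
# Per-sample arguments for countcorracc.py. Only the DYM1020 values are
# from the documented example; the DYM20 entry is a dummy placeholder.
samples = {
    "DYM1020": {"n": 5.78216, "s": 3320.0, "e": 9630633.0, "l": 15, "h": 20},
    "DYM20":   {"n": 1.0,     "s": 1.0,    "e": 1.0,       "l": 20, "h": 500},
}

def build_test_command(name, p,
                       weights="DYMoutput/fewz_powheg_weights_stepwise_2011_fine12.root"):
    # Assemble one local countcorracc.py invocation for a stepwise sample.
    # The --sg path is sample-specific in reality; shortened here.
    return ("python countcorracc.py --r %s --o fewz_powheg_corracc_2011"
            " --n %s --s %s --e %s --l %s --h %s --sg %s/zp2mu_histos.root"
            % (weights, p["n"], p["s"], p["e"], p["l"], p["h"], name))
```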
To submit a batch job do:
cd /home/ba01/u112/aeverett/scratch_rcac/20112011/DYM1020
farmoutAnalysisJobsPyList.sh bob $CMSSW_BASE /home/ba01/u112/aeverett/scratch_rcac/20112011/countcorracc.py --skip-srmcp --submit-dir=$PWD/corracc_stepwise_coarse7b --input-files-per-job=1 --assume-input-files-exist --input-file-list=$PWD/inFileList.txt --input-dir=$PWD/ --python-arguments=" --r /home/ba01/u112/aeverett/scratch_rcac/20112011/DYMoutput/fewz_powheg_weights_stepwise_2011_coarse7.root --o fewz_powheg_corracc_2011 --n 5.78216 --s 3320.0 --e 9630633.0 --l 15 --h 20"
The tricky part about this script is:
//this is still old
One needs to configure the pT and eta cuts for each muon in the script. This is done very easily in two places:
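As an illustration, such a per-muon selection typically looks like the following. The cut values and names here are assumptions, not the ones actually used in countcorracc.py:

```python
# Hypothetical kinematic acceptance cuts; the real values live inside
# countcorracc.py and must be kept consistent in both places mentioned above.
PT_CUT = 20.0   # GeV (assumed value)
ETA_CUT = 2.4   # (assumed value)

def in_acceptance(pt, eta, pt_cut=PT_CUT, eta_cut=ETA_CUT):
    """Return True if a muon passes the pT and |eta| cuts."""
    return pt > pt_cut and abs(eta) < eta_cut
```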