
Making the Menu_HLT.py file (STEAM maps)

dbeghin edited this page Jan 11, 2021 · 2 revisions

Much of the power of the STEAM rates estimation tool is in its ability to give rates for datasets/groups/streams, and calculate the total "physics" rate. But for these rates to be accurate, the maps found in the Menu_HLT.py file need to be up-to-date.

DISCLAIMER

To produce new maps, we can use scripts developed by Nadir many years ago. They originally had a much wider scope: they let you keep track of whether HLT paths could be prescaled, what their target rates were, whether the rates needed to be flat as a function of instantaneous luminosity, and so on. There is even an automated tool to produce suggested prescales adapted to the target rate of each HLT path.

However, we will only use these tools to generate the STEAM maps. We'll treat them mostly as black boxes, since Nadir is no longer in physics and is not available to comment on the code in detail.

Overview of the map generation workflow

  1. Get HLT menu information from confdb.

  2. Review the information and update manually if necessary.

  3. Generate the Menu_HLT.py file.

Get HLT menu information from confdb

The CMS environment needs to be active; at the top of your CMSSW release, run:

cmsenv

Then:

cd SteamRatesEdmWorkflow/STEAM_maps

and create a file called hlt.py containing the confdb information for your menu:

hltConfigFromDB --configName /dev/CMSSW_11_1_0/GRun/V11 > hlt.py

(replace /dev/CMSSW_11_1_0/GRun/V11 with the name of your menu).

Generate a first csv table using the hlt.py file you just created:

./hltDumpStream --csv --clean hlt.py > step1.log

This will create a table called outputfile.csv.

Review the information and update manually if necessary

You can rename the outputfile.csv table (e.g. after the corresponding HLT menu) so you can better keep track of it later. Import the table into a spreadsheet (remember to specify that the separator is ";"). The table should have the following columns: "stream", "dataset", "path", "group", "type", "rate", then a number of HLT prescale columns; the last column lists the L1 seeds. The information in the table is accurate except for the "group" and "type" columns (we don't care about "rate" here).
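Besides importing it into a spreadsheet, the table can also be inspected programmatically as a sanity check. The sketch below parses semicolon-separated rows into dicts; the sample rows and the reduced column set are hypothetical (a real outputfile.csv has many more columns, including the prescales):

```python
import csv
import io

# Hypothetical sample in the hltDumpStream csv format (separator ";").
# Real files have ~36 columns; only a few are shown here.
SAMPLE = (
    "stream;dataset;path;group;type;rate;L1 seeds\n"
    "PhysicsMuons;SingleMuon;HLT_IsoMu24_v;MUO;signal;250;L1_SingleMu22\n"
    "PhysicsEGamma;EGamma;HLT_Ele32_WPTight_Gsf_v;EGM;signal;180;L1_SingleEG26\n"
)

def load_hlt_table(fileobj):
    """Parse a semicolon-separated HLT menu table into a list of dicts."""
    reader = csv.DictReader(fileobj, delimiter=";")
    return list(reader)

rows = load_hlt_table(io.StringIO(SAMPLE))
print(len(rows))        # 2
print(rows[0]["path"])  # HLT_IsoMu24_v
```

To run it on your actual table, replace the io.StringIO(SAMPLE) with open("outputfile.csv").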

The "group" column tells us which POG/PAG/other group the HLT path belongs to. If you want this information to be up-to-date, you need to ask the POG/PAG trigger contacts to review the table and check that their group is assigned the correct paths.

The "type" of an HLT path can be "signal", "control", "backup" or "safety", according to how the path is used. This is mostly useful for knowing whether it's acceptable to prescale the path; e.g. a signal path shouldn't be prescaled. You may or may not want to keep tracking this information; the code will still work even if the "type" information is wrong. If you do want to track the types, you'll probably need help from the POG/PAG contacts again.
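If you do track the types, a quick scripted pass can catch obvious mistakes before handing the table back to the contacts. The helper below is hypothetical, not part of the STEAM tools; it only checks that each row's "type" is one of the four known values:

```python
# Allowed values for the "type" column, per the STEAM conventions above.
VALID_TYPES = {"signal", "control", "backup", "safety"}

def find_type_issues(rows):
    """Return (path, type) pairs whose type is not a known value."""
    return [(r["path"], r["type"]) for r in rows if r["type"] not in VALID_TYPES]

# Hypothetical rows as parsed from the csv table.
rows = [
    {"path": "HLT_IsoMu24_v", "type": "signal"},
    {"path": "HLT_Mu8_v", "type": "prescaled"},  # not a valid type
]
print(find_type_issues(rows))  # [('HLT_Mu8_v', 'prescaled')]
```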

Generate the Menu_HLT.py file

Once you're satisfied with the HLT menu table in your spreadsheet, export it as a tsv (NOT csv) file. Then run the makeMaps.py script. If it's been a while since you last ran the script, first open it and check that the column information is correct:

nCol = 36 is the total number of columns in your tsv file.

iPath = 2 is the column number for the HLT path information (start counting at 0, from the left).

iGroup = 3 is the column number for the group information (start counting at 0, from the left).

iType = 4 is the column number for the path type (start counting at 0, from the left).

iStatus = -1, iEnable = -1, iTarget = -1 and iFlatness = -1 refer to outdated columns; set any of them to a negative number to disable it.

iStream = 0 is the column number for the stream information (start counting at 0, from the left).

iDataset = 1 is the column number for the dataset information (start counting at 0, from the left).
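As an illustration of what these settings mean, here is a minimal sketch of how a tsv row maps onto the configured indices. This is not the actual makeMaps.py code, just an assumption about what the indices select (a real row has nCol = 36 tab-separated fields; the sample below is shortened):

```python
# Column indices as configured in makeMaps.py (0-based, from the left).
iStream, iDataset, iPath, iGroup, iType = 0, 1, 2, 3, 4

def parse_row(line):
    """Split one tab-separated tsv row and pick out the configured columns."""
    cols = line.rstrip("\n").split("\t")
    return {
        "stream": cols[iStream],
        "dataset": cols[iDataset],
        "path": cols[iPath],
        "group": cols[iGroup],
        "type": cols[iType],
    }

row = parse_row("PhysicsMuons\tSingleMuon\tHLT_IsoMu24_v\tMUO\tsignal")
print(row["dataset"])  # SingleMuon
```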

Then close and run the script:

python makeMaps.py table_name.tsv > makeMaps.log

This produces an output Python file called SteamDB.py. Open it to check that everything makes sense (e.g. check that datasets appear in the dataset map, and that the names of POGs and PAGs appear in the group map). Then add the dataset->stream map to the beginning of the file. For now there is no automated tool to create this map, but you can copy the following lines (updating them if necessary):

datasetStreamMap = {

    'DoubleEG'                  : 'PhysicsEGamma',
    'SingleElectron'            : 'PhysicsEGamma',
    'SinglePhoton'              : 'PhysicsEGamma',
    'EGamma'                    : 'PhysicsEGamma',
    'BTagCSV'                   : 'PhysicsHadronsTaus',
    'BTagMu'                    : 'PhysicsHadronsTaus',
    'DisplacedJet'              : 'PhysicsHadronsTaus',
    'HTMHT'                     : 'PhysicsHadronsTaus',
    'JetHT'                     : 'PhysicsHadronsTaus',
    'MET'                       : 'PhysicsHadronsTaus',
    'Tau'                       : 'PhysicsHadronsTaus',
    'Charmonium'                : 'PhysicsMuons',
    'DoubleMuon'                : 'PhysicsMuons',
    'DoubleMuonLowMass'         : 'PhysicsMuons',
    'SingleMuon'                : 'PhysicsMuons',
    'MuOnia'                    : 'PhysicsMuons',
    'MuonEG'                    : 'PhysicsMuons',
    'ParkingScoutingMonitor'    : 'PhysicsParkingScoutingMonitor',
    'ScoutingCaloCommissioning' : 'ScoutingCaloMuon',
    'ScoutingCaloHT'            : 'ScoutingCaloMuon',
    'ScoutingCaloMuon'          : 'ScoutingCaloMuon',
    'ScoutingPFCommissioning'   : 'ScoutingPF',
    'ScoutingPFHT'              : 'ScoutingPF',
    'Commissioning'             : 'PhysicsCommissioning',
    'HLTPhysics'                : 'PhysicsCommissioning',
    'HCalNZS'                   : 'PhysicsCommissioning',
    'HighPtLowerPhotons'        : 'PhysicsCommissioning',
    'HighPtPhoton30AndZ'        : 'PhysicsCommissioning',
    'IsolatedBunch'             : 'PhysicsCommissioning',
    'NoBPTX'                    : 'PhysicsCommissioning',
    'ZeroBias'                  : 'PhysicsCommissioning'

}
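Once the map is pasted in, a quick check is to invert it and eyeball which datasets feed each stream; a typo in a stream name will show up as a spurious extra stream. A minimal sketch (the map below is truncated for brevity; use the full one from above):

```python
from collections import defaultdict

# Truncated copy of datasetStreamMap; substitute the full map from above.
datasetStreamMap = {
    'DoubleEG'   : 'PhysicsEGamma',
    'EGamma'     : 'PhysicsEGamma',
    'JetHT'      : 'PhysicsHadronsTaus',
    'SingleMuon' : 'PhysicsMuons',
}

# Invert dataset -> stream into stream -> [datasets] to spot typos.
streamToDatasets = defaultdict(list)
for dataset, stream in datasetStreamMap.items():
    streamToDatasets[stream].append(dataset)

for stream, datasets in sorted(streamToDatasets.items()):
    print(stream, "<-", sorted(datasets))
```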

Now you just need to rename SteamDB.py to Menu_HLT.py and copy it into the Rates directory. To be safe, keep a backup of the old Menu_HLT.py file.