Replies: 3 comments 23 replies
-
Hi James, there are a number of things that may have an impact here. As a first step, I would suggest comparing the runtime of a single maximum-likelihood estimate (MLE) fit. A few more thoughts:
When you say "categories", do you mean channels in pyhf? If you are interested in another datapoint, you may also want to check out the implementation in
-
Just for some more information about the workspaces, here's the summary of
-
Hi, there's been no update. @AlkaidCheng, is this still an open discussion? Re-open or comment if so.
-
Hi!
In our ATLAS analysis, we have noticed a significant speed difference between our pyhf limit-setting framework and another independent framework, and wanted your thoughts/advice. I'm a member of the HH → 4b analysis, and one of our results is a limit on the HH production cross-section at various κλ (the Higgs self-coupling modifier) points. We have a number of categories in the analysis, and adding them has slowed our pyhf-based limit setting down significantly. Put simply, we use a root finder and `pyhf.infer.hypotest` to find the μ corresponding to CLs = 0.05. For example, for κλ = 1, getting an expected limit on μ with error bands takes ~25 hours 39 minutes. An alternative framework used by the HH combination team, quickstats, which uses ROOT, extracted the same limit in ~79 seconds. We found this difference very surprising! Please let us know your thoughts and whether you would like any more information.

Cheers!
James