# A detailed map of Higgs boson interactions by the ATLAS experiment ten years after the discovery – Nature

Jul 4, 2022

### Experimental set-up

The ATLAS detector12 consists of an inner tracking detector surrounded by a thin superconducting solenoid, electromagnetic and hadron calorimeters, and a muon spectrometer incorporating three large superconducting air-core toroidal magnets.

ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point in the centre of the detector and the z axis along the beam pipe. The x axis points from the interaction point to the centre of the LHC ring, and the y axis points upwards. Cylindrical coordinates (r, ϕ) are used in the transverse plane, ϕ being the azimuthal angle around the z axis. The pseudorapidity is defined in terms of the polar angle θ as η = −ln(tan(θ/2)).
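The pseudorapidity definition can be checked numerically. The sketch below is purely illustrative (not part of the ATLAS software); it verifies that η = 0 for a particle emitted perpendicular to the beam line and relates the |η| < 2.5 tracking acceptance to a polar angle.

```python
import math

def pseudorapidity(theta: float) -> float:
    """Pseudorapidity eta = -ln(tan(theta/2)) for polar angle theta in radians."""
    return -math.log(math.tan(theta / 2.0))

# A particle emitted perpendicular to the beam line (theta = pi/2) has eta = 0.
eta_central = pseudorapidity(math.pi / 2)

# |eta| = 2.5, the edge of the tracking acceptance, corresponds to
# theta = 2*atan(exp(-2.5)), roughly 9.4 degrees from the beam axis.
theta_edge = 2.0 * math.atan(math.exp(-2.5))
```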

The inner-detector (ID) system is immersed in a 2-T axial magnetic field and provides charged-particle tracking in the range |η| < 2.5. The high-granularity silicon pixel detector covers the vertex region and typically provides four measurements per track, the first hit normally being in the insertable B-layer (IBL) installed before Run 2 (refs. 60,61). It is followed by the silicon microstrip tracker (SCT), which usually provides eight measurements per track. These silicon detectors are complemented by the transition radiation tracker (TRT), which enables radially extended track reconstruction up to |η| < 2.0. The TRT also provides electron identification information based on the fraction of hits (typically 30 in total) above a higher energy-deposit threshold corresponding to transition radiation.

The calorimeter system covers the pseudorapidity range |η| < 4.9. Within the region |η| < 3.2, electromagnetic calorimetry is provided by barrel and endcap high-granularity lead/liquid-argon (LAr) calorimeters, with an additional thin LAr presampler covering |η| < 1.8 to correct for energy loss in material upstream of the calorimeters. Hadron calorimetry is provided by the steel/scintillator-tile calorimeter, segmented into three barrel structures within |η| < 1.7, and two copper/LAr hadron endcap calorimeters. The solid angle coverage is completed with forward copper/LAr and tungsten/LAr calorimeter modules optimized for electromagnetic and hadronic energy measurements, respectively.

The muon spectrometer (MS) comprises separate trigger and high-precision tracking chambers measuring the deflection of muons in a magnetic field generated by the superconducting air-core toroidal magnets. The field integral of the toroids ranges between 2.0 and 6.0 Tm across most of the detector. Three layers of precision chambers, each consisting of layers of monitored drift tubes, cover the region |η| < 2.7, complemented by cathode-strip chambers in the forward region, where the background is highest. The muon trigger system covers the range |η| < 2.4 with resistive-plate chambers in the barrel, and thin-gap chambers in the endcap regions.

The performance of the vertex and track reconstruction in the inner detector, the calorimeter resolution in electromagnetic and hadronic calorimeters and the muon momentum resolution provided by the muon spectrometer are given previously12.

Interesting events are selected by the first-level trigger system implemented in custom hardware, followed by selections made by algorithms implemented in software in the high-level trigger62. The first-level trigger accepts events from the 40-MHz bunch crossings at a rate below 100 kHz, which the high-level trigger further reduces in order to record events to disk at about 1 kHz.

### Statistical framework

The results of the combination presented in this paper are obtained from a likelihood function defined as the product of the likelihoods of each input measurement. The observed yield in each category of reconstructed events follows a Poisson distribution whose parameter is the sum of the expected signal and background contributions. The number of signal events in any category k is split into the different production and decay modes:

$${n}_{k}^{{\rm{signal}}}={{\mathcal{L}}}_{k}\sum _{i}\sum _{f}({\sigma }_{i}{B}_{f}){(A\epsilon )}_{if}^{k},$$

where the sum indexed by i runs either over the production processes (ggF, VBF, WH, ZH, $$t\bar{t}H$$, tH) or over the set of the measured production kinematic regions, and the sum indexed by f runs over the decay final states (ZZ, WW, γγ, Zγ, $$b\bar{b}$$, $$c\bar{c}$$, τ+τ−, μ+μ−). The quantity $${{\mathcal{L}}}_{k}$$ is the integrated luminosity of the dataset used in category k, and $${(A\epsilon )}_{if}^{k}$$ is the acceptance times selection efficiency factor for production process i and decay mode f in category k. Acceptances and efficiencies are obtained from the simulation (corrected by calibration measurements in control data for the efficiencies). Their values are subject to variations due to experimental and theoretical systematic uncertainties. The cross-sections σi and branching fractions Bf are the parameters of interest of the model. Depending on the model being tested, they are either free parameters, set to their standard model predictions or parameterized as functions of other parameters. All cross-sections are defined in the Higgs boson rapidity range |yH| < 2.5, which is related to the polar angle of the Higgs boson’s momentum in the detector and corresponds approximately to the region of experimental sensitivity.
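As an illustration, the expected yield in a single category can be evaluated directly from this sum over production and decay modes. The numbers below are rough SM-like placeholders (13-TeV ggF and VBF cross-sections, the H → γγ branching fraction, and invented acceptance-times-efficiency factors), not the calibrated inputs used in the combination.

```python
def expected_signal(lumi, xsec, branching, acc_eff):
    """n_k^signal = L_k * sum_i sum_f (sigma_i * B_f) * (A*eps)_{if}^k."""
    return lumi * sum(
        xsec[i] * branching[f] * acc_eff[(i, f)]
        for i in xsec
        for f in branching
    )

# Illustrative inputs only; real values come from theory predictions
# and simulation corrected by calibration measurements.
xsec = {"ggF": 48.6, "VBF": 3.78}            # cross-sections in pb
br = {"gamgam": 2.27e-3}                      # B(H -> gamma gamma)
acc = {("ggF", "gamgam"): 0.4,                # hypothetical (A*eps) factors
       ("VBF", "gamgam"): 0.3}

n_sig = expected_signal(139e3, xsec, br, acc)  # 139 fb^-1 = 139e3 pb^-1
```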

The impact of experimental and theoretical systematic uncertainties on the predicted signal and background yields is taken into account by nuisance parameters included in the likelihood function. The predicted signal yields from each production process, the branching fractions and the signal acceptance in each analysis category are affected by theory uncertainties. The combined likelihood function is therefore expressed as:

$$L({\boldsymbol{\alpha }},{\boldsymbol{\theta }},{\rm{d}}{\rm{a}}{\rm{t}}{\rm{a}})=\prod _{k\in {\rm{c}}{\rm{a}}{\rm{t}}}\prod _{b\in {\rm{b}}{\rm{i}}{\rm{n}}{\rm{s}}}P({n}_{k,b}|{n}_{k,b}^{{\rm{s}}{\rm{i}}{\rm{g}}{\rm{n}}{\rm{a}}{\rm{l}}}({\boldsymbol{\alpha }},{\boldsymbol{\theta }})+{n}_{k,b}^{{\rm{b}}{\rm{k}}{\rm{g}}}({\boldsymbol{\theta }}))\prod _{\theta \in {\boldsymbol{\theta }}}G(\theta ),$$

where nk,b, $${n}_{k,b}^{{\rm{signal}}}$$ and $${n}_{k,b}^{{\rm{bkg}}}$$ stand for the number of observed events, the number of expected signal events and the number of expected background events in bin b of analysis category k, respectively. The parameters of interest are denoted α, the nuisance parameters are θ, P represents the Poisson distribution, and G stands for Gaussian constraint terms assigned to the nuisance parameters. Some nuisance parameters are meant to be determined by data alone and do not have any associated constraint term. This is, for instance, the case for background normalization factors that are fitted in control categories. The effects of nuisance parameters affecting the normalizations of signal and backgrounds in a given category are generally implemented using the multiplicative expression:
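A minimal sketch of this likelihood structure for one category of counting bins, assuming unit-Gaussian constraint terms; the `constrained` flags are a hypothetical device marking which nuisance parameters carry a constraint term (freely floating normalization factors do not):

```python
import math

def neg2_log_likelihood(n_obs, n_sig, n_bkg, thetas, constrained):
    """-2 ln L for Poisson counting bins with Gaussian constraint terms.

    n_obs, n_sig, n_bkg: per-bin observed, expected-signal and
    expected-background yields, already evaluated at the current
    (alpha, theta) values. thetas: nuisance parameter values;
    `constrained` marks which of them have a unit-Gaussian constraint.
    """
    nll = 0.0
    for n, s, b in zip(n_obs, n_sig, n_bkg):
        mu = s + b
        # ln Poisson(n | mu) = n*ln(mu) - mu - ln(n!)
        nll -= n * math.log(mu) - mu - math.lgamma(n + 1)
    for theta, has_constraint in zip(thetas, constrained):
        if has_constraint:
            nll += 0.5 * theta * theta  # -ln G(theta), up to a constant
    return 2.0 * nll
```

In a real fit, the signal yields would themselves be functions of the parameters of interest and nuisance parameters, as in the yield equation above.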

$$n(\theta )={n}^{0}{(1+\sigma )}^{\theta },$$

where n0 is the nominal expected yield of either signal or background and σ is the value of the uncertainty. This ensures that n(θ) > 0 even for negative values of θ. For the majority of nuisance parameters, including all those affecting the shapes of the distributions, a linear expression is used instead in each bin of the distributions:

$$n(\theta )={n}^{0}(1+\sigma \theta ).$$
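The two interpolation choices can be contrasted directly: they agree at θ = 0 and θ = 1, but only the multiplicative form guarantees a positive yield for arbitrarily negative θ (the numbers below are illustrative):

```python
def multiplicative_response(n0, sigma, theta):
    """n(theta) = n0 * (1 + sigma)^theta -- positive for any theta."""
    return n0 * (1.0 + sigma) ** theta

def linear_response(n0, sigma, theta):
    """n(theta) = n0 * (1 + sigma * theta) -- used for shape variations."""
    return n0 * (1.0 + sigma * theta)

# With a 10% uncertainty on a nominal yield of 100 events, the linear
# form goes negative for theta < -10 while the multiplicative form
# remains positive for any value of theta.
n0, sigma = 100.0, 0.10
```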

The systematic uncertainties are broken down into independent underlying sources, so that when a source affects multiple or all analyses the associated nuisance parameter can be fully correlated across the terms in the likelihood corresponding to these analyses by using common nuisance parameters. This is the case for systematic uncertainties in the luminosity measurement63, in the reconstruction and selection efficiencies64,65,66,67,68,69,70 and in the calibrations of the energy measurements71,72,73,74. Their effects are propagated coherently by using common nuisance parameters whenever applicable. Only a few components of the systematic uncertainties are correlated between the analyses performed using the full Run 2 data and those using only the 2015 and 2016 data, owing to differences in their assessment, in the reconstruction algorithms and in software releases. Systematic uncertainties associated with the modelling of background processes, as well as uncertainties due to the limited number of simulated events used to estimate the expected signal and background yields, are treated as being uncorrelated between analyses.

Uncertainties in the parton distribution functions are implemented coherently in all input measurements and all analysis categories75. Uncertainties in modelling the parton showering into jets of particles affect the signal acceptances and efficiencies, and are common to all input measurements within a given production process. Similarly, uncertainties due to missing higher-order quantum chromodynamics (QCD) corrections are common to a given production process. Their implementation in the kinematic regions of the simplified template cross-sections framework results in a total of 66 uncertainty sources, where overall acceptance effects are separated from migrations between the various bins (for example, between jet multiplicity regions or between dijet invariant mass regions)76. Both the acceptance and signal yield uncertainties affect the signal strength modifier and coupling strength modifier results, which rely on comparisons of measured and expected yields. Only acceptance uncertainties affect the cross-section and branching fraction results. The uncertainties in the Higgs boson branching fractions due to dependencies on standard model parameter values (such as b and c quark masses) and missing higher-order effects are implemented using the correlation model described previously44.

In total, over 2,600 sources of systematic uncertainty are included in the combined likelihood. For most of the presented measurements, the systematic uncertainty is expected to be of similar size or somewhat smaller than the corresponding statistical uncertainty. The systematic uncertainties are dominant for the parameters that are measured the most precisely, that is, the global signal strength and the production cross-sections for the ggF and VBF processes. The expected systematic uncertainty of the global signal strength measurement (about 5%) is larger than the statistical uncertainty (3%), with similar contributions from the theory uncertainties in signal (4%) and background modelling (1.7%), and from the experimental systematic uncertainty (3%). The latter is predominantly composed of the uncertainty in the luminosity measurement (1.7%), followed by the uncertainties in electron, jet and b-jet reconstruction, in data-driven background modelling, and from the limited number of simulated events (about 1% each). All other sources of experimental uncertainty combined contribute an additional 1%. The systematic uncertainty in the production cross-section of the ggF process is dominated by experimental uncertainties (3.5%) followed by signal theory uncertainties (3%), compared to a statistical uncertainty of 4%. For the VBF process, where the statistical uncertainty is 8%, the experimental uncertainties are estimated to be 5%, and the signal theory uncertainties add up to 7%. Systematic uncertainties are also dominant over the statistical uncertainties in the measurements of the branching fractions into W pairs and τ lepton pairs.

Measurements of the parameters of interest use a statistical test based on the profile likelihood ratio52:

$$\varLambda ({\boldsymbol{\alpha }})=\frac{L({\boldsymbol{\alpha }},\hat{\hat{{\boldsymbol{\theta }}}}({\boldsymbol{\alpha }}))}{L(\hat{{\boldsymbol{\alpha }}},\hat{{\boldsymbol{\theta }}})},$$

where α are the parameters of interest and θ are the nuisance parameters. The $$\hat{\hat{{\boldsymbol{\theta }}}}({\boldsymbol{\alpha }})$$ notation indicates that the nuisance parameter values are those that maximize the likelihood for given values of the parameters of interest. In the denominator, both the parameters of interest and the nuisance parameters are set to the values ($$\hat{{\boldsymbol{\alpha }}}$$, $$\hat{{\boldsymbol{\theta }}}$$) that unconditionally maximize the likelihood. The estimates of the parameters α are the values $$\hat{{\boldsymbol{\alpha }}}$$ that maximize the likelihood ratio.
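The profiling procedure can be illustrated with a toy single-bin counting model: a signal strength μ as the parameter of interest and one constrained nuisance parameter θ that scales the background by 10%. The yields are invented for illustration; this is a sketch of the construction, not the ATLAS fit machinery.

```python
from scipy.optimize import minimize
from scipy.stats import norm, poisson

# Toy inputs (purely illustrative): one bin with observed count n_obs,
# nominal signal yield s and nominal background yield b.
n_obs, s, b = 25, 10.0, 12.0

def nll(params):
    """Negative log-likelihood: Poisson count times a Gaussian constraint."""
    mu, theta = params
    expected = max(mu * s + b * (1.0 + 0.1 * theta), 1e-9)
    return -poisson.logpmf(n_obs, expected) - norm.logpdf(theta)

def profile_lambda(mu):
    """-2 ln Lambda(mu): theta is profiled at fixed mu (theta-hat-hat),
    then compared to the unconditional maximum (mu-hat, theta-hat)."""
    conditional = minimize(lambda t: nll([mu, t[0]]), [0.0])
    unconditional = minimize(nll, [1.0, 0.0])
    return 2.0 * (conditional.fun - unconditional.fun)
```

Here the unconditional fit gives μ-hat ≈ 1.3 (the signal strength that exactly reproduces the observed count at θ = 0), so the profile likelihood ratio is close to zero at μ = 1.3 and grows as μ moves away from it.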

Owing to the usually large number of events selected in the measurements, all results presented in this paper are obtained in the asymptotic regime where the likelihood approximately follows a Gaussian distribution. It was checked in previous iterations of the individual input measurements, for instance ref. 77, that this assumption also holds in cases with low event counts by comparing the results of the asymptotic formulae with those of pseudo-experiments. This confirmed the results from a previous work52 that the Gaussian approximation becomes valid for as few as 5 background events. In the asymptotic regime twice the negative logarithm of the profile likelihood λ(α) = −2ln(Λ(α)) follows a χ2 distribution with a number of degrees of freedom equal to the number of parameters of interest. Confidence intervals for a given confidence level (CL), usually 68%, are then defined as the regions fulfilling $$\lambda ({\boldsymbol{\alpha }}) < {F}_{n}^{-1}({\rm{C}}{\rm{L}})$$ where $${F}_{n}^{-1}$$ is the quantile function of the χ2 distribution with n degrees of freedom, so $${F}_{1}^{-1}=1\,(4)$$ for a 1σ (2σ) CL with one degree of freedom. The values of the parameters α corresponding to these confidence intervals are obtained by scanning the profile likelihood. Similarly, the p value pSM = 1 − Fn(λ(αSM)) is used to test the compatibility of the measurement and the standard model prediction. The correlations between the parameters are estimated by inverting the matrix of the second derivatives of the likelihood.
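The quantile values quoted above can be reproduced directly from the χ² distribution; the snippet below is a sketch using SciPy, not the analysis code, and the λ value in the p-value example is an arbitrary illustration.

```python
from scipy.stats import chi2

# Confidence-interval thresholds F_n^{-1}(CL) for one parameter of interest:
one_sigma = chi2.ppf(0.6827, df=1)   # lambda(alpha) < ~1 for a 68% CL interval
two_sigma = chi2.ppf(0.9545, df=1)   # lambda(alpha) < ~4 for a 95% CL interval

# Compatibility with the standard model: p_SM = 1 - F_n(lambda(alpha_SM)),
# for example with an illustrative lambda = 2.3 and two parameters of interest:
p_sm = chi2.sf(2.3, df=2)
```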

The expected significances and limits are determined using the ‘Asimov’ datasets52, which are obtained by setting the observed yields to their expected values when the nuisance parameters are set to the values $$\hat{{\boldsymbol{\theta }}}$$ that maximize the likelihood.
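For a single counting bin, the Asimov dataset yields a closed-form median discovery significance, given in ref. 52; the sketch below implements that formula and checks its familiar s/√b limit for small signals.

```python
import math

def asimov_significance(s: float, b: float) -> float:
    """Median expected discovery significance from the Asimov dataset
    for one counting bin: Z_A = sqrt(2*((s+b)*ln(1+s/b) - s))  (ref. 52)."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# For s << b this reduces to the familiar s / sqrt(b) approximation;
# for larger s/b the full formula is noticeably more accurate.
z_small = asimov_significance(1.0, 1000.0)
z_large = asimov_significance(10.0, 100.0)
```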

### Parameterization within the κ framework

Within the κ framework, the cross-section for an individual measurement is parameterized as

$$\sigma (i\to H\to f)={\sigma }_{i}{B}_{f}=\frac{{\sigma }_{i}({\boldsymbol{\kappa }}){\varGamma }_{f}({\boldsymbol{\kappa }})}{{\varGamma }_{H}({\boldsymbol{\kappa }},{B}_{{\rm{inv.}}},{B}_{{\rm{u.}}})},$$

where Γf is the partial width for a Higgs boson decay to the final state f and ΓH is the total decay width of the Higgs boson. The total width is given by the sum of the partial widths of all the decay modes included. Contributions to the total Higgs boson decay width owing to phenomena beyond the standard model may manifest themselves as a value of coupling strength modifier κp differing from one, or a value of Binv. or Bu. differing from zero. The Higgs boson total width is then expressed as $${\varGamma }_{H}({\boldsymbol{\kappa }},{B}_{{\rm{inv.}}},{B}_{{\rm{u.}}})={\kappa }_{H}^{2}({\boldsymbol{\kappa }},{B}_{{\rm{inv.}}},{B}_{{\rm{u.}}}){\varGamma }_{H}^{{\rm{SM}}}$$ with

$${\kappa }_{H}^{2}({\boldsymbol{\kappa }},{B}_{{\rm{inv.}}},{B}_{{\rm{u.}}})=\frac{{\sum }_{p}{B}_{p}^{{\rm{SM}}}{\kappa }_{p}^{2}}{(1-{B}_{{\rm{inv.}}}-{B}_{{\rm{u.}}})}.$$
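As an illustration, κ_H² can be evaluated from approximate SM branching fractions for a 125 GeV Higgs boson; the values below are rounded placeholders for illustration, not the precise predictions used in the fit.

```python
# Approximate SM branching fractions for m_H = 125 GeV (rounded, illustrative).
B_SM = {"bb": 0.58, "WW": 0.215, "gg": 0.082, "tautau": 0.063,
        "cc": 0.029, "ZZ": 0.026, "gamgam": 0.0023, "other": 0.0027}

def kappa_H_sq(kappas: dict, B_inv: float = 0.0, B_u: float = 0.0) -> float:
    """kappa_H^2 = sum_p B_p^SM * kappa_p^2 / (1 - B_inv - B_u).
    Coupling modifiers not listed in `kappas` default to their SM value of 1."""
    numerator = sum(B_SM[p] * kappas.get(p, 1.0) ** 2 for p in B_SM)
    return numerator / (1.0 - B_inv - B_u)

# All kappa_p = 1 with no BSM decays leaves the total width unchanged,
# while a nonzero invisible branching fraction inflates it.
width_sm = kappa_H_sq({})
width_inv = kappa_H_sq({}, B_inv=0.2)
```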

Higgs boson production cross-sections and partial and total decay widths are parameterized in terms of the coupling strength modifiers as shown in table 9 of ref. 22. An improved parameterization including additional sub-leading contributions is used in this paper to match the increased precision of the measurements.

### Kinematic regions probing Higgs boson production

The definitions of kinematic regions for the precision study of Higgs boson production in the framework of simplified template cross-sections44,56,57,58 are based on the predicted properties of particles generated in a given production process. The partitioning follows the so-called Stage-1.2 scheme, which features a slightly finer granularity than the Stage-1.1 scheme57 and introduces the Higgs boson transverse momentum categories for the $$t\bar{t}H$$ production process. Higgs bosons are required to be produced with rapidity |yH| < 2.5. Associated jets of particles are constructed from all stable particles with a lifetime greater than 10 ps, excluding the decay products of the Higgs boson and leptons from W and Z boson decays, using the anti-kt algorithm78 with a jet radius parameter R = 0.4, and must have a transverse momentum pT,jet > 30 GeV. Standard model predictions are assumed for the kinematic properties of Higgs boson decays. Phenomena beyond the standard model can substantially modify these properties, and thus the acceptance of the signal, especially for the WW or ZZ decay modes, and this should be considered when using these measurements for the relevant interpretations.

Higgs boson production is first classified according to the nature of the initial state and the associated particles, the latter including the decay products of W and Z bosons if they are present. These classes are: $$t\bar{t}H$$ and tH processes; qq′ → Hqq′ processes, with contributions from both VBF and quark-initiated VH (where V = W, Z) production with a hadronic decay of the vector boson; pp → VH production with a leptonic decay of the vector boson (V(ℓℓ, ℓν)H), including gg → ZH → ℓℓH production; and finally the ggF process combined with $$gg\to ZH\to q\bar{q}H$$ production to form a single gg → H process. The contribution of the $$b\bar{b}H$$ production process is taken into account as a 1%44 increase of the gg → H yield in each kinematic region, because the acceptances for both processes are similar for all input analyses44.

The input measurements in individual decay modes provide only limited sensitivity to the cross-section in some of the regions of the Stage-1.2 scheme, mainly because of the small number of events in some of these regions. In other cases, they only provide sensitivity to a combination of these regions, leading to strongly correlated measurements. To mitigate these effects, some of the Stage-1.2 kinematic regions were merged for the combined measurement.

Compared to individual input measurements, systematic theory uncertainties associated with the signal predictions have been updated for the combination to closely follow the granularity of the Stage-1.2 scheme. The QCD scale uncertainties in ggF production were updated for all input channels that are sensitive to this production process. Out of 18 uncertainty sources in total, two account for overall fixed-order and resummation effects, two cover the migrations between different jet multiplicity bins, seven are associated with the modelling of the Higgs boson transverse momentum ($${p}_{{\rm{T}}}^{H}$$) in different phase-space regions, four account for the uncertainty in the distribution of the dijet invariant mass (mjj) variable, one covers the modelling of the Higgs boson plus two leading jets transverse momentum ($${p}_{{\rm{T}}}^{Hjj}$$) distribution in the ≥2-jet region, one pertains to modelling of the distribution of the Higgs boson plus one jet transverse momentum ($${p}_{{\rm{T}}}^{Hj}$$) divided by $${p}_{{\rm{T}}}^{H}$$ in the high-$${p}_{{\rm{T}}}^{H}$$ region, and finally, the last takes into account the uncertainty from the choice of top quark mass scheme. Theory uncertainties for the qq′ → Hqq′ and $$t\bar{t}H$$ processes are defined previously28, and those of the V(ℓℓ, ℓν)H kinematic region follow the scheme described in an earlier work76. For the kinematic regions defined by the merging of several Stage-1.2 regions, the signal acceptance factors are determined assuming that the relative fractions in each Stage-1.2 region are given by their standard model values, and the uncertainties predicted by the standard model in these fractions are taken into account.