CP violation
In particle physics, CP violation is a violation of CP-symmetry (or charge conjugation parity symmetry): the combination of C-symmetry (charge symmetry) and P-symmetry (parity symmetry). CP-symmetry states that the laws of physics should be the same if a particle is interchanged with its antiparticle (C-symmetry) while its spatial coordinates are inverted ("mirror" or P-symmetry). The discovery of CP violation in 1964 in the decays of neutral kaons resulted in the Nobel Prize in Physics in 1980 for its discoverers James Cronin and Val Fitch.
Baryon asymmetry
In physical cosmology, the baryon asymmetry problem, also known as the matter asymmetry problem or the matter–antimatter asymmetry problem, is the observed imbalance between baryonic matter (the type of matter experienced in everyday life) and antibaryonic matter in the observable universe. Neither the Standard Model of particle physics nor the theory of general relativity provides a known explanation for why this should be so, even though it is natural to assume that the universe should be neutral, with all conserved charges summing to zero.
Belle experiment
The Belle experiment was a particle physics experiment conducted by the Belle Collaboration, an international collaboration of more than 400 physicists and engineers, at the High Energy Accelerator Research Organisation (KEK) in Tsukuba, Ibaraki Prefecture, Japan. The experiment ran from 1999 to 2010. The Belle detector was located at the collision point of the asymmetric-energy electron–positron collider, KEKB.
Baryogenesis
In physical cosmology, baryogenesis (also known as baryosynthesis) is the physical process that is hypothesized to have taken place during the early universe to produce baryonic asymmetry, i.e. the imbalance of matter (baryons) and antimatter (antibaryons) in the observed universe. One of the outstanding problems in modern physics is the predominance of matter over antimatter in the universe. The universe, as a whole, seems to have a nonzero positive baryon number density.
T-symmetry
T-symmetry or time reversal symmetry is the theoretical symmetry of physical laws under the transformation of time reversal, T: t ↦ −t. Since the second law of thermodynamics states that entropy increases as time flows toward the future, in general, the macroscopic universe does not show symmetry under time reversal. In other words, time is said to be non-symmetric, or asymmetric, except for special equilibrium states when the second law of thermodynamics predicts the time symmetry to hold.
Standard deviation
In statistics, the standard deviation is a measure of the amount of variation or dispersion of a set of values. A low standard deviation indicates that the values tend to be close to the mean (also called the expected value) of the set, while a high standard deviation indicates that the values are spread out over a wider range. Standard deviation may be abbreviated SD, and is most commonly represented in mathematical texts and equations by the lower case Greek letter σ (sigma), for the population standard deviation, or the Latin letter s, for the sample standard deviation.
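The distinction between the population standard deviation σ (dividing by n) and the sample standard deviation s (dividing by n − 1) can be sketched with Python's standard `statistics` module; the data values here are purely illustrative:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # illustrative values, mean = 5

# Population standard deviation (sigma): divides the squared deviations by n
sigma = statistics.pstdev(data)

# Sample standard deviation (s): divides by n - 1 (Bessel's correction)
s = statistics.stdev(data)

print(sigma)  # 2.0
print(s)      # slightly larger, since n - 1 < n
```

For the same data, s is always at least as large as σ, because Bessel's correction compensates for the sample mean underestimating spread.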
Kaon
In particle physics, a kaon (/ˈkeɪ.ɒn/), also called a K meson and denoted K, is any of a group of four mesons distinguished by a quantum number called strangeness. In the quark model they are understood to be bound states of a strange quark (or antiquark) and an up or down antiquark (or quark). Kaons have proved to be a copious source of information on the nature of fundamental interactions since their discovery in cosmic rays in 1947.
Cabibbo–Kobayashi–Maskawa matrix
In the Standard Model of particle physics, the Cabibbo–Kobayashi–Maskawa matrix, CKM matrix, quark mixing matrix, or KM matrix is a unitary matrix which contains information on the strength of the flavour-changing weak interaction. Technically, it specifies the mismatch of quantum states of quarks when they propagate freely and when they take part in the weak interactions. It is important in the understanding of CP violation.
Median absolute deviation
In statistics, the median absolute deviation (MAD) is a robust measure of the variability of a univariate sample of quantitative data. It can also refer to the population parameter that is estimated by the MAD calculated from a sample. For a univariate data set X1, X2, ..., Xn, the MAD is defined as the median of the absolute deviations from the data's median: that is, starting with the residuals (deviations) from the data's median, the MAD is the median of their absolute values. Consider the data (1, 1, 2, 2, 4, 6, 9): its median is 2, the absolute deviations from that median are (1, 1, 0, 0, 2, 4, 7), and the median of those deviations, 1, is the MAD.
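The two-step definition above (median, then median of absolute deviations) is short enough to sketch directly in Python for the example data:

```python
from statistics import median

data = [1, 1, 2, 2, 4, 6, 9]

m = median(data)                      # median of the data: 2
abs_dev = [abs(x - m) for x in data]  # absolute deviations: [1, 1, 0, 0, 2, 4, 7]
mad = median(abs_dev)                 # median of those deviations

print(mad)  # 1
```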
Weak interaction
In nuclear physics and particle physics, the weak interaction, which is also often called the weak force or weak nuclear force, is one of the four known fundamental interactions, with the others being electromagnetism, the strong interaction, and gravitation. It is the mechanism of interaction between subatomic particles that is responsible for the radioactive decay of atoms. The weak interaction participates in nuclear fission and nuclear fusion.
Beta decay
In nuclear physics, beta decay (β-decay) is a type of radioactive decay in which an atomic nucleus emits a beta particle (a fast, energetic electron or positron), transforming into an isobar of that nuclide. For example, beta decay of a neutron transforms it into a proton by the emission of an electron accompanied by an antineutrino; or, conversely, a proton is converted into a neutron by the emission of a positron with a neutrino, in so-called positron emission.
Average absolute deviation
The average absolute deviation (AAD) of a data set is the average of the absolute deviations from a central point. It is a summary statistic of statistical dispersion or variability. In the general form, the central point can be a mean, median, mode, or the result of any other measure of central tendency or any reference value related to the given data set. AAD includes the mean absolute deviation and the median absolute deviation (both abbreviated as MAD). Several measures of statistical dispersion are defined in terms of the absolute deviation.
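The general form, with the central point as a free parameter, can be sketched as a small Python helper (the function name and sample values are illustrative, not from the source):

```python
from statistics import mean, median

def average_absolute_deviation(data, center):
    """Average of the absolute deviations of the data from a chosen central point."""
    return mean(abs(x - center) for x in data)

data = [2, 2, 3, 4, 14]

# Same statistic, two choices of central point:
print(average_absolute_deviation(data, mean(data)))    # about the mean (5.0) -> 3.6
print(average_absolute_deviation(data, median(data)))  # about the median (3) -> 2.8
```

Note that the deviation about the median is never larger than the deviation about the mean, since the median minimizes the average absolute deviation.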
Neutral particle oscillation
In particle physics, neutral particle oscillation is the transmutation of a particle with zero electric charge into another neutral particle due to a change of a non-zero internal quantum number, via an interaction that does not conserve that quantum number. Neutral particle oscillations were first investigated in 1954 by Murray Gell-Mann and Abraham Pais. For example, a neutron cannot transmute into an antineutron, as that would violate the conservation of baryon number.
LHCb experiment
The LHCb (Large Hadron Collider beauty) experiment is a particle physics detector experiment collecting data at the Large Hadron Collider at CERN. LHCb is a specialized b-physics experiment, designed primarily to measure the parameters of CP violation in the interactions of b-hadrons (heavy particles containing a bottom quark). Such studies can help to explain the matter–antimatter asymmetry of the Universe. The detector is also able to perform measurements of production cross sections, exotic hadron spectroscopy, charm physics and electroweak physics in the forward region.
Generalized continued fraction
In complex analysis, a branch of mathematics, a generalized continued fraction is a generalization of regular continued fractions in canonical form, in which the partial numerators and partial denominators can assume arbitrary complex values. A generalized continued fraction is an expression of the form

    x = b_0 + a_1 / (b_1 + a_2 / (b_2 + a_3 / (b_3 + ···)))

where the a_n (n > 0) are the partial numerators, the b_n are the partial denominators, and the leading term b_0 is called the integer part of the continued fraction.
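A finite truncation of such an expression can be evaluated from the innermost term outward; the sketch below (the helper name and the √2 example are illustrative, not from the source) uses exact rationals to avoid rounding at intermediate steps:

```python
from fractions import Fraction

def eval_gcf(b0, terms):
    """Evaluate b0 + a1/(b1 + a2/(b2 + ...)) for a finite list of
    (a_n, b_n) pairs, working from the innermost term outward."""
    value = Fraction(0)
    for a, b in reversed(terms):
        value = Fraction(a) / (b + value)
    return b0 + value

# The regular continued fraction [1; 2, 2, 2, ...] (all partial numerators 1)
# converges to sqrt(2); ten terms already give a good rational approximation.
approx = eval_gcf(1, [(1, 2)] * 10)
print(approx, float(approx))
```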
Sampling (statistics)
In statistics, quality assurance, and survey methodology, sampling is the selection of a subset or a statistical sample (termed sample for short) of individuals from within a statistical population to estimate characteristics of the whole population. Statisticians attempt to collect samples that are representative of the population. Sampling has lower costs and faster data collection compared to recording data from the entire population, and thus, it can provide insights in cases where it is infeasible to measure an entire population.
Coefficient of variation
In probability theory and statistics, the coefficient of variation (CV), also known as normalized root-mean-square deviation (NRMSD), percent RMS, and relative standard deviation (RSD), is a standardized measure of dispersion of a probability distribution or frequency distribution. It is defined as the ratio of the standard deviation σ to the mean μ (or to its absolute value, |μ|), and is often expressed as a percentage ("%RSD"). The CV or RSD is widely used in analytical chemistry to express the precision and repeatability of an assay.
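The ratio definition translates directly into a few lines of Python; the replicate measurements below are hypothetical values standing in for repeated assay readings:

```python
import statistics

def cv_percent(data):
    """Relative standard deviation: sample standard deviation over the
    absolute value of the mean, expressed as a percentage (%RSD)."""
    m = statistics.mean(data)
    return 100 * statistics.stdev(data) / abs(m)

# Hypothetical replicate assay measurements clustered near 10
replicates = [9.8, 10.1, 10.0, 9.9, 10.2]
print(round(cv_percent(replicates), 2))  # a low %RSD indicates a repeatable assay
```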
Exponential function
The exponential function is a mathematical function denoted by f(x) = exp(x) or e^x (where the argument x is written as an exponent). Unless otherwise specified, the term generally refers to the positive-valued function of a real variable, although it can be extended to the complex numbers or generalized to other mathematical objects like matrices or Lie algebras. The exponential function originated from the notion of exponentiation (repeated multiplication), but modern definitions (there are several equivalent characterizations) allow it to be rigorously extended to all real arguments, including irrational numbers.
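One of those equivalent characterizations is the power series exp(x) = Σ xⁿ/n!, which makes sense for any real (or complex) argument; a minimal sketch, with the function name and term count chosen here for illustration:

```python
import math

def exp_series(x, terms=30):
    """Approximate exp(x) by the power series sum over n of x**n / n!,
    one of the standard characterizations of the exponential function."""
    total = 1.0   # the n = 0 term, x**0 / 0! = 1
    term = 1.0
    for n in range(1, terms):
        term *= x / n   # x**n / n! from the previous term
        total += term
    return total

print(exp_series(1.0))   # close to e = 2.718281828...
print(math.exp(1.0))     # library value for comparison
```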
Robust measures of scale
In statistics, robust measures of scale are methods that quantify the statistical dispersion in a sample of numerical data while resisting outliers. The most common such robust statistics are the interquartile range (IQR) and the median absolute deviation (MAD). These are contrasted with conventional or non-robust measures of scale, such as sample standard deviation, which are greatly influenced by outliers.
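The contrast can be sketched by adding one extreme outlier to a small data set and comparing the three measures; the helper names and sample values are illustrative:

```python
import statistics

def iqr(data):
    """Interquartile range: the 75th percentile minus the 25th percentile."""
    q = statistics.quantiles(data, n=4)  # [Q1, Q2, Q3]
    return q[2] - q[0]

def mad(data):
    """Median absolute deviation from the median."""
    m = statistics.median(data)
    return statistics.median(abs(x - m) for x in data)

clean = [1, 2, 3, 4, 5, 6, 7, 8, 9]
with_outlier = clean + [1000]

print(statistics.stdev(clean), statistics.stdev(with_outlier))  # SD explodes
print(iqr(clean), iqr(with_outlier))                            # IQR barely moves
print(mad(clean), mad(with_outlier))                            # MAD barely moves
```

A single wild value drags the standard deviation up by two orders of magnitude, while the IQR and MAD, which depend only on the middle of the ordered sample, change very little.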
Mathematical formulation of the Standard Model
This article describes the mathematics of the Standard Model of particle physics, a gauge quantum field theory containing the internal symmetries of the unitary product group SU(3) × SU(2) × U(1). The theory is commonly viewed as describing the fundamental set of particles: the leptons, quarks, gauge bosons and the Higgs boson. The Standard Model is renormalizable and mathematically self-consistent; however, despite its huge and continued success in providing experimental predictions, it does leave some phenomena unexplained.