Vector space: In mathematics and physics, a vector space (also called a linear space) is a set whose elements, often called vectors, may be added together and multiplied ("scaled") by numbers called scalars. Scalars are often real numbers, but can be complex numbers or, more generally, elements of any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called vector axioms. The terms real vector space and complex vector space are often used to specify the nature of the scalars; the real coordinate spaces ℝⁿ and the complex coordinate spaces ℂⁿ are standard examples.
Matrix decomposition: In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems. In numerical analysis, different decompositions are used to implement efficient matrix algorithms. For instance, when solving a system of linear equations Ax = b, the matrix A can be decomposed via the LU decomposition.
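As an illustrative sketch (not from the article), here is how the LU route might look with SciPy; the matrix A and right-hand side b are arbitrary examples:

```python
# A minimal sketch: factor A once with an LU decomposition, then solve A x = b.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0], [6.0, 3.0]])
b = np.array([10.0, 12.0])

lu, piv = lu_factor(A)       # factor A = P L U once
x = lu_solve((lu, piv), b)   # reuse the factors to solve for this b
print(x)                     # -> [1. 2.]
print(np.allclose(A @ x, b)) # True: x solves the system
```

Factoring once and reusing the factors is the usual payoff: additional right-hand sides can be solved cheaply with further lu_solve calls.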
Definite matrix: In mathematics, a symmetric matrix M with real entries is positive-definite if the real number xᵀMx is positive for every nonzero real column vector x, where xᵀ is the transpose of x. More generally, a Hermitian matrix (that is, a complex matrix equal to its conjugate transpose) is positive-definite if the real number z*Mz is positive for every nonzero complex column vector z, where z* denotes the conjugate transpose of z. Positive semi-definite matrices are defined similarly, except that the scalars xᵀMx and z*Mz are required to be positive or zero (that is, nonnegative).
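For a symmetric real matrix, positive-definiteness is equivalent to all eigenvalues being positive, which gives a simple numerical check; a minimal sketch with NumPy (the example matrix is an arbitrary choice):

```python
# A minimal sketch: test positive-definiteness of a symmetric matrix by
# checking that all eigenvalues are positive (equivalent, for symmetric M,
# to x^T M x > 0 for all nonzero x).
import numpy as np

M = np.array([[2.0, -1.0], [-1.0, 2.0]])  # symmetric example matrix
eigenvalues = np.linalg.eigvalsh(M)       # real eigenvalues of a symmetric matrix
print(eigenvalues)                        # -> [1. 3.], all positive
print(bool(np.all(eigenvalues > 0)))      # True: M is positive-definite
```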
Cholesky decomposition: In linear algebra, the Cholesky decomposition or Cholesky factorization (pronounced /ʃəˈlɛski/) is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is useful for efficient numerical solutions, e.g., in Monte Carlo simulations. It was discovered by André-Louis Cholesky for real matrices and published posthumously in 1924. When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.
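A minimal NumPy sketch of the factorization itself, using an arbitrary symmetric positive-definite example matrix:

```python
# A minimal sketch: np.linalg.cholesky returns the lower-triangular L
# with A = L L^T (L L^* in the complex Hermitian case).
import numpy as np

A = np.array([[4.0, 2.0], [2.0, 3.0]])  # symmetric positive-definite example
L = np.linalg.cholesky(A)
print(L)                                # lower triangular factor
print(np.allclose(L @ L.T, A))          # True: L L^T reconstructs A
```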
Eigenvalues and eigenvectors: In linear algebra, an eigenvector (/ˈaɪɡənˌvɛktər/) or characteristic vector of a linear transformation is a nonzero vector that changes at most by a constant factor when that linear transformation is applied to it. The corresponding eigenvalue, often represented by λ, is the multiplying factor. Geometrically, a transformation matrix rotates, stretches, or shears the vectors it acts upon. The eigenvectors for a linear transformation matrix are the set of vectors that are only stretched, with no rotation or shear.
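A minimal sketch, using NumPy and an arbitrary example matrix, of the defining property Av = λv:

```python
# A minimal sketch: numerically verify that an eigenvector is only
# scaled (by its eigenvalue lambda) under the transformation A.
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 3.0]])        # arbitrary example matrix
eigenvalues, eigenvectors = np.linalg.eig(A)  # columns are eigenvectors
v = eigenvectors[:, 0]
lam = eigenvalues[0]
print(np.allclose(A @ v, lam * v))            # True: A v = lambda v
```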
Topological vector space: In mathematics, a topological vector space (also called a linear topological space and commonly abbreviated TVS or t.v.s.) is one of the basic structures investigated in functional analysis. A topological vector space is a vector space that is also a topological space with the property that the vector space operations (vector addition and scalar multiplication) are also continuous functions. Such a topology is called a vector topology, and every topological vector space has a uniform topological structure, allowing a notion of uniform convergence and completeness.
Square root of a matrix: In mathematics, the square root of a matrix extends the notion of square root from numbers to matrices. A matrix B is said to be a square root of A if the matrix product BB is equal to A. Some authors use the name square root or the notation A^(1/2) only for the specific case when A is positive semidefinite, to denote the unique matrix B that is positive semidefinite and such that BB = BᵀB = A (for real-valued matrices, where Bᵀ is the transpose of B).
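For the positive-semidefinite case, the square root can be computed from an eigendecomposition; a minimal NumPy sketch under the assumption that A is symmetric positive semidefinite (the example matrix is arbitrary):

```python
# A minimal sketch: the positive-semidefinite square root of a symmetric
# PSD matrix A via its eigendecomposition, B = Q sqrt(D) Q^T.
import numpy as np

A = np.array([[4.0, 0.0], [0.0, 9.0]])  # arbitrary symmetric PSD example
w, Q = np.linalg.eigh(A)                # A = Q diag(w) Q^T with w >= 0
B = Q @ np.diag(np.sqrt(w)) @ Q.T       # the unique PSD square root
print(B)
print(np.allclose(B @ B, A))            # True: B B = A
```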
Normed vector space: In mathematics, a normed vector space or normed space is a vector space over the real or complex numbers on which a norm is defined. A norm is a generalization of the intuitive notion of "length" in the physical world. If V is a vector space over K, where K is a field equal to ℝ or to ℂ, then a norm on V is a map V → ℝ, typically denoted by ‖·‖, satisfying the following four axioms: Non-negativity: for every x ∈ V, ‖x‖ ≥ 0. Positive definiteness: for every x ∈ V, ‖x‖ = 0 if and only if x is the zero vector.
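A small numerical illustration of these axioms for the Euclidean norm on ℝ², using NumPy (the vector is an arbitrary example):

```python
# A minimal sketch: the Euclidean norm via numpy.linalg.norm, spot-checking
# non-negativity, positive definiteness, and absolute homogeneity.
import numpy as np

x = np.array([3.0, 4.0])                   # arbitrary example vector
print(np.linalg.norm(x))                   # 5.0, and always >= 0
print(np.linalg.norm(np.zeros(2)) == 0.0)  # True: zero only for the zero vector
print(np.isclose(np.linalg.norm(2.0 * x),
                 2.0 * np.linalg.norm(x))) # True: scaling scales the length
```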
Ordered vector space: In mathematics, an ordered vector space or partially ordered vector space is a vector space equipped with a partial order that is compatible with the vector space operations. Given a vector space X over the real numbers ℝ and a preorder ≤ on the set X, the pair (X, ≤) is called a preordered vector space, and we say that the preorder ≤ is compatible with the vector space structure of X (and call ≤ a vector preorder on X) if for all x, y, z ∈ X and r ∈ ℝ with r ≥ 0 the following two axioms are satisfied: x ≤ y implies x + z ≤ y + z, and x ≤ y implies rx ≤ ry. If ≤ is a partial order compatible with the vector space structure of X, then (X, ≤) is called an ordered vector space and ≤ is called a vector partial order on X. The two axioms imply that translations and positive homotheties are automorphisms of the order structure, and the mapping x ↦ −x is an isomorphism to the dual order structure.
Dimension (vector space): In mathematics, the dimension of a vector space V is the cardinality (i.e., the number of vectors) of a basis of V over its base field. It is sometimes called Hamel dimension (after Georg Hamel) or algebraic dimension to distinguish it from other types of dimension. For every vector space there exists a basis, and all bases of a vector space have equal cardinality; as a result, the dimension of a vector space is uniquely defined. We say V is finite-dimensional if the dimension of V is finite, and infinite-dimensional if its dimension is infinite.
Matrix (mathematics): In mathematics, a matrix (plural matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object. For example,
    [ 1   9  −13 ]
    [ 20  5   −6 ]
is a matrix with two rows and three columns. This is often referred to as a "two-by-three matrix", a "2 × 3 matrix", or a matrix of dimension 2 × 3. Without further specifications, matrices represent linear maps, and allow explicit computations in linear algebra.
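A minimal sketch of the same 2 × 3 example as a NumPy array:

```python
# A minimal sketch: a 2 x 3 matrix as a NumPy array and its shape.
import numpy as np

A = np.array([[1, 9, -13],
              [20, 5, -6]])  # two rows, three columns
print(A.shape)               # (2, 3), i.e. a "two-by-three" matrix
```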
Complete topological vector space: In functional analysis and related areas of mathematics, a complete topological vector space is a topological vector space (TVS) with the property that whenever points get progressively closer to each other, there exists some point towards which they all get closer. The notion of "points that get progressively closer" is made rigorous by Cauchy nets or Cauchy filters, which are generalizations of Cauchy sequences, while "point towards which they all get closer" means that this Cauchy net or filter converges to some point x. The notion of completeness for TVSs uses the theory of uniform spaces as a framework to generalize the notion of completeness for metric spaces.
Linear map: In mathematics, and more specifically in linear algebra, a linear map (also called a linear mapping, linear transformation, vector space homomorphism, or in some contexts linear function) is a mapping between two vector spaces that preserves the operations of vector addition and scalar multiplication. The same names and the same definition are also used for the more general case of modules over a ring; see Module homomorphism. If a linear map is a bijection then it is called a linear isomorphism.
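A minimal sketch, assuming the map is given by an arbitrary example matrix A, spot-checking the two preserved operations:

```python
# A minimal sketch: every matrix A defines a linear map x -> A x;
# here we spot-check additivity and homogeneity for one matrix.
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 1.0]])     # arbitrary example matrix
f = lambda x: A @ x
u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])
c = 2.5
print(np.allclose(f(u + v), f(u) + f(v)))  # True: preserves vector addition
print(np.allclose(f(c * u), c * f(u)))     # True: preserves scalar multiplication
```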
Graded vector space: In mathematics, a graded vector space is a vector space that has the extra structure of a grading or gradation, which is a decomposition of the vector space into a direct sum of vector subspaces, generally indexed by the integers. For "pure" vector spaces, the concept has been introduced in homological algebra, and it is widely used for graded algebras, which are graded vector spaces with additional structures. Let ℕ be the set of non-negative integers.
Metrizable topological vector space: In functional analysis and related areas of mathematics, a metrizable (resp. pseudometrizable) topological vector space (TVS) is a TVS whose topology is induced by a metric (resp. pseudometric). An LM-space is an inductive limit of a sequence of locally convex metrizable TVSs.
Linear algebra: Linear algebra is the branch of mathematics concerning linear equations such as a₁x₁ + ⋯ + aₙxₙ = b, linear maps such as (x₁, …, xₙ) ↦ a₁x₁ + ⋯ + aₙxₙ, and their representations in vector spaces and through matrices. Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis, a branch of mathematical analysis, may be viewed as the application of linear algebra to spaces of functions.
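A minimal sketch tying the two representations together: the coefficients of a system of linear equations form a matrix, and NumPy solves the system directly (the numbers are arbitrary examples):

```python
# A minimal sketch: a system of linear equations in matrix form A x = b,
# solved with NumPy.
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])  # coefficient matrix (a_ij)
b = np.array([9.0, 8.0])                # right-hand sides (b_i)
x = np.linalg.solve(A, b)
print(x)                                # -> [2. 3.]
```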
Swiss Alps: The Alpine region of Switzerland, conventionally referred to as the Swiss Alps (German: Schweizer Alpen, French: Alpes suisses, Italian: Alpi svizzere, Romansh: Alps svizras), represents a major natural feature of the country and is, along with the Swiss Plateau and the Swiss portion of the Jura Mountains, one of its three main physiographic regions. The Swiss Alps extend over both the Western Alps and the Eastern Alps, encompassing an area sometimes called the Central Alps.
Quantum Fourier transform: In quantum computing, the quantum Fourier transform (QFT) is a linear transformation on quantum bits, and is the quantum analogue of the discrete Fourier transform. The quantum Fourier transform is a part of many quantum algorithms, notably Shor's algorithm for factoring and computing the discrete logarithm, the quantum phase estimation algorithm for estimating the eigenvalues of a unitary operator, and algorithms for the hidden subgroup problem. The quantum Fourier transform was discovered by Don Coppersmith.
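A minimal sketch constructing the QFT as an explicit matrix with NumPy, for a small number of qubits; the helper name qft_matrix is our own:

```python
# A minimal sketch: the QFT on n qubits as a 2^n x 2^n unitary matrix with
# entries omega^(jk) / sqrt(N), where omega = exp(2 pi i / N) and N = 2^n.
import numpy as np

def qft_matrix(n_qubits: int) -> np.ndarray:
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N))
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(2)
print(np.allclose(F.conj().T @ F, np.eye(4)))  # True: F is unitary, as required
```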
Financial econometrics: Financial econometrics is the application of statistical methods to financial market data. Financial econometrics is a branch of financial economics, in the field of economics. Areas of study include capital markets, financial institutions, corporate finance and corporate governance. Topics often revolve around asset valuation of individual stocks, bonds, derivatives, currencies and other financial instruments. It differs from other forms of econometrics because the emphasis is usually on analyzing the prices of financial assets traded at competitive, liquid markets.
Central limit theorem: In probability theory, the central limit theorem (CLT) establishes that, in many situations, for independent and identically distributed random variables, the sampling distribution of the standardized sample mean tends towards the standard normal distribution even if the original variables themselves are not normally distributed. The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions.
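A minimal simulation sketch with NumPy: standardized means of exponential draws (a non-normal distribution with mean 1 and standard deviation 1) behave approximately like a standard normal variable; the sample sizes are arbitrary choices:

```python
# A minimal sketch: standardized sample means of a decidedly non-normal
# distribution concentrate around a standard normal shape as n grows.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1000, 10_000
samples = rng.exponential(scale=1.0, size=(trials, n))  # mean 1, std 1
z = (samples.mean(axis=1) - 1.0) / (1.0 / np.sqrt(n))   # standardized sample means
print(z.mean(), z.std())  # approximately 0 and 1, as the CLT predicts
```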