| title (string, 1–113 chars) | text (string, 9–3.55k chars) | source (1 class) |
|---|---|---|
critical flow
|
In marine hydrodynamic applications, the Froude number is usually referenced with the notation Fn and is defined as $Fn = \frac{u}{\sqrt{gL}}$, where u is the relative flow velocity between the sea and the ship, g is the acceleration due to gravity, and L is the length of the ship at the waterline level, or Lwl in some notations. It is an important parameter with respect to the ship's drag, or resistance, especially in terms of wave-making resistance. In the case of planing craft, where the waterline length is too speed-dependent to be meaningful, the Froude number is best defined as the displacement Froude number, with the reference length taken as the cube root of the volumetric displacement $\nabla$ of the hull: $Fn_\nabla = \frac{u}{\sqrt{g\,\nabla^{1/3}}}$.
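A minimal numerical sketch of both definitions (the velocity, waterline length, and displacement below are hypothetical illustration values, not from the article):

```python
# Computes the waterline-length Froude number and the displacement Froude
# number from the definitions quoted above. All input values are hypothetical.
import math

def froude_number(u, L, g=9.81):
    """Fn = u / sqrt(g * L), with L the waterline length in metres."""
    return u / math.sqrt(g * L)

def displacement_froude_number(u, volume, g=9.81):
    """Displacement Froude number: reference length is the cube root of the
    volumetric displacement of the hull."""
    return u / math.sqrt(g * volume ** (1.0 / 3.0))

u = 10.0      # relative flow velocity, m/s (hypothetical)
Lwl = 50.0    # waterline length, m (hypothetical)
vol = 1200.0  # volumetric displacement, m^3 (hypothetical)

print(froude_number(u, Lwl))               # ~0.45
print(displacement_froude_number(u, vol))  # ~0.98
```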
|
wikipedia
|
marketing engineering
|
In marketing engineering, methods and models can be classified into several categories:
|
wikipedia
|
observational techniques
|
In marketing research, the most frequently used types of observational techniques are:
- Personal observation:
  - observing products in use to detect usage patterns and problems
  - observing license plates in store parking lots
  - determining the socio-economic status of shoppers
  - determining the level of package scrutiny
  - determining the time it takes to make a purchase decision
- Mechanical observation:
  - eye-tracking analysis while subjects watch advertisements
  - oculometers – what the subject is looking at
  - pupilometers – how interested is the viewer
  - electronic checkout scanners – record purchase behaviour
  - on-site cameras in stores
  - people meters (as in monitoring television viewing), e.g. the Nielsen box
  - voice pitch meters – measure emotional reactions
  - psychogalvanometers – measure galvanic skin response
- Audits:
  - retail audits to determine the quality of service in stores
  - inventory audits to determine product acceptance
  - shelf space audits
  - scanner-based audits
- Trace analysis:
  - credit card records
  - computer cookie records
  - garbology – looking for traces of purchase patterns in garbage
  - detecting store traffic patterns by observing the wear in the floor (long term) or the dirt on the floor (short term)
  - exposure to advertisements
- Content analysis:
  - observe the content of magazines, television broadcasts, radio broadcasts, or newspapers, either articles, programs, or advertisements
|
wikipedia
|
brand development
|
In marketing, brand management begins with an analysis of how a brand is currently perceived in the market, proceeds to planning how the brand should be perceived if it is to achieve its objectives, and continues with ensuring that the brand is perceived as planned and secures its objectives. Developing a good relationship with target markets is essential for brand management. Tangible elements of brand management include the product itself: its look, price, packaging, etc. The intangible elements are the experiences that the target markets share with the brand, and also the relationships they have with the brand. A brand manager would oversee all aspects of the consumer's brand association as well as relationships with members of the supply chain.
|
wikipedia
|
fuzzy clustering
|
In marketing, customers can be grouped into fuzzy clusters based on their needs, brand choices, psychographic profiles, or other marketing-related partitions.
|
wikipedia
|
linear discriminant analysis
|
In marketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data. Logistic regression or other methods are now more commonly used. The use of discriminant analysis in marketing can be described by the following steps: (1) Formulate the problem and gather data: identify the salient attributes consumers use to evaluate products in this category, and use quantitative marketing research techniques (such as surveys) to collect data from a sample of potential customers concerning their ratings of all the product attributes. The data collection stage is usually done by marketing research professionals.
|
wikipedia
|
linear discriminant analysis
|
Survey questions ask the respondent to rate a product from one to five (or 1 to 7, or 1 to 10) on a range of attributes chosen by the researcher. Anywhere from five to twenty attributes are chosen. They could include things like: ease of use, weight, accuracy, durability, colourfulness, price, or size.
|
wikipedia
|
linear discriminant analysis
|
The attributes chosen will vary depending on the product being studied. The same question is asked about all the products in the study. The data for multiple products is codified and input into a statistical program such as R, SPSS or SAS.
|
wikipedia
|
linear discriminant analysis
|
(This step is the same as in factor analysis.) (2) Estimate the discriminant function coefficients and determine the statistical significance and validity: choose the appropriate discriminant analysis method. The direct method involves estimating the discriminant function so that all the predictors are assessed simultaneously.
|
wikipedia
|
linear discriminant analysis
|
The stepwise method enters the predictors sequentially. The two-group method should be used when the dependent variable has two categories or states. The multiple discriminant method is used when the dependent variable has three or more categorical states.
|
wikipedia
|
linear discriminant analysis
|
Use Wilks's lambda to test for significance in SPSS, or the F statistic in SAS. The most common method used to test validity is to split the sample into an estimation or analysis sample, and a validation or holdout sample. The estimation sample is used in constructing the discriminant function.
|
wikipedia
|
linear discriminant analysis
|
The validation sample is used to construct a classification matrix which contains the number of correctly classified and incorrectly classified cases. The percentage of correctly classified cases is called the hit ratio. (3) Plot the results on a two-dimensional map, define the dimensions, and interpret the results.
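A minimal sketch (not from the article) of the validation step described above, using scikit-learn's linear discriminant analysis on synthetic placeholder data: split the sample into an estimation sample and a holdout sample, fit the discriminant function, then report the classification matrix and hit ratio.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
# 200 respondents x 6 attribute ratings on a 1-7 scale (synthetic)
X = rng.integers(1, 8, size=(200, 6)).astype(float)
# synthetic segment labels (e.g. buyer vs non-buyer), loosely tied to two attributes
y = (X[:, 0] + X[:, 3] + rng.normal(0, 1, 200) > 8).astype(int)

X_est, X_hold, y_est, y_hold = train_test_split(X, y, test_size=0.3, random_state=0)

lda = LinearDiscriminantAnalysis()
lda.fit(X_est, y_est)                 # estimation sample builds the discriminant function

pred = lda.predict(X_hold)            # classify the holdout (validation) sample
cm = confusion_matrix(y_hold, pred)   # classification matrix
hit_ratio = np.trace(cm) / cm.sum()   # share of correctly classified cases
print(cm)
print(f"hit ratio: {hit_ratio:.2%}")
```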
|
wikipedia
|
linear discriminant analysis
|
The statistical program (or a related module) will map the results. The map will plot each product (usually in two-dimensional space).
|
wikipedia
|
linear discriminant analysis
|
The distance of products from each other indicates how different they are. The dimensions must be labelled by the researcher. This requires subjective judgement and is often very challenging. See perceptual mapping.
|
wikipedia
|
bundled software
|
In marketing, product bundling is offering several products or services for sale as one combined product or service package. It is a common feature in many imperfectly competitive product and service markets. Industries engaged in the practice include telecommunications services, financial services, health care, information, and consumer electronics.
|
wikipedia
|
bundled software
|
A software bundle might combine a word processor, spreadsheet, and presentation program into a single office suite. The cable television industry often bundles many TV and movie channels into a single tier or package. The fast food industry combines separate food items into a "meal deal" or "value meal".
|
wikipedia
|
bundled software
|
A bundle of products may be called a package deal; in recorded music or video games, a compilation or box set; or in publishing, an anthology. Most firms are multi-product or multi-service companies faced with the decision of whether to sell products or services separately at individual prices or to market combinations of products as "bundles" for which a single "bundle price" is asked. Price bundling plays an increasingly important role in many industries (e.g. banking, insurance, software, automotive) and some companies even build their business strategies on bundling. In bundle pricing, companies sell a package or set of goods or services for a lower price than they would charge if the customer bought all of them separately. Pursuing a bundle pricing strategy allows a business to increase its profit by using a discount to induce customers to buy more than they otherwise would have.
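A toy numerical illustration (hypothetical willingness-to-pay values, not from the article) of why a bundle price can raise revenue: two customers value two products in opposite ways, so a single bundle price captures more surplus than separate prices can.

```python
# willingness to pay: {customer: (value of product A, value of product B)}
wtp = {"customer 1": (9.0, 3.0), "customer 2": (3.0, 9.0)}

def revenue_separate(price_a, price_b):
    """Revenue when each product is priced and sold separately."""
    return sum((price_a if a >= price_a else 0) + (price_b if b >= price_b else 0)
               for a, b in wtp.values())

def revenue_bundle(bundle_price):
    """Revenue when only the two-product bundle is offered."""
    return sum(bundle_price for a, b in wtp.values() if a + b >= bundle_price)

# Best uniform separate prices here are 9 for each product (one buyer each).
print(revenue_separate(9.0, 9.0))   # 18.0
# A bundle priced at 12 is bought by both customers.
print(revenue_bundle(12.0))         # 24.0
```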
|
wikipedia
|
loss aversion
|
In marketing, the use of trial periods and rebates tries to take advantage of the buyer's tendency to value the good more after the buyer incorporates it in the status quo. In past behavioral economics studies, users participate up until the threat of loss equals any incurred gains. Recent methods established by Botond Kőszegi and Matthew Rabin in experimental economics illustrate the role of expectation, wherein an individual's belief about an outcome can create an instance of loss aversion, whether or not a tangible change of state has occurred. Whether a transaction is framed as a loss or as a gain is important to this calculation.
|
wikipedia
|
loss aversion
|
The same change in price framed differently, for example as a $5 discount or as a $5 surcharge avoided, has a significant effect on consumer behavior. Although traditional economists consider this "endowment effect", and all other effects of loss aversion, to be completely irrational, it is important to the fields of marketing and behavioral finance. Users in behavioral and experimental economics studies decided to cease participation in iterative money-making games when the threat of loss was close to the expenditure of effort, even when the user stood to further their gains. Loss aversion coupled with myopia has been shown to explain macroeconomic phenomena, such as the equity premium puzzle.
|
wikipedia
|
computer media
|
In mass communication, digital media is any communication media that operate in conjunction with various encoded machine-readable data formats. Digital content can be created, viewed, distributed, modified, listened to, and preserved on a digital electronics device, including digital data storage media (in contrast to analog electronic media) and digital broadcasting. Digital refers to any data represented by a series of digits, and media refers to methods of broadcasting or communicating this information. Together, digital media refers to mediums of digitized information broadcast through a screen and/or a speaker. This also includes text, audio, video, and graphics that are transmitted over the internet for viewing or listening to on the internet. Digital media platforms, such as YouTube, Vimeo, and Twitch, accounted for viewership rates of 27.9 billion hours in 2020. A contributing factor to digital media's part in what is commonly referred to as the digital revolution is the use of interconnectivity.
|
wikipedia
|
electron multiplier
|
In mass spectrometry, electron multipliers are often used as a detector of ions that have been separated by a mass analyzer of some sort. They can be of the continuous-dynode type and may have a curved, horn-like funnel shape, or can have discrete dynodes as in a photomultiplier. Continuous-dynode electron multipliers are also used in NASA missions and are coupled to a gas chromatography mass spectrometer (GC-MS), which allows scientists to determine the amount and types of gases present on Titan, Saturn's largest moon.
|
wikipedia
|
data-independent acquisition
|
In mass spectrometry, data-independent acquisition (DIA) is a method of molecular structure determination in which all ions within a selected m/z range are fragmented and analyzed in a second stage of tandem mass spectrometry. Tandem mass spectra are acquired either by fragmenting all ions that enter the mass spectrometer at a given time (called broadband DIA) or by sequentially isolating and fragmenting ranges of m/z. DIA is an alternative to data-dependent acquisition (DDA) where a fixed number of precursor ions are selected and analyzed by tandem mass spectrometry.
|
wikipedia
|
fragmentation pattern
|
In mass spectrometry, fragmentation is the dissociation of energetically unstable molecular ions formed when molecules pass through the ionization source of a mass spectrometer. These reactions have been well documented over the decades, and fragmentation patterns are useful for determining the molecular weight and structural information of unknown molecules. Fragmentation that occurs in tandem mass spectrometry experiments has been a recent focus of research, because this data helps facilitate the identification of molecules.
|
wikipedia
|
golden record (informatics)
|
In master data management (MDM), the golden copy refers to the master data (master version) of the reference data, which serves as the authoritative source of truth for all applications in a given IT landscape.
|
wikipedia
|
functionally graded material
|
In materials science, functionally graded materials (FGMs) are characterized by a gradual variation in composition and structure over volume, resulting in corresponding changes in the properties of the material. The materials can be designed for specific functions and applications. Various approaches based on bulk (particulate) processing, preform processing, layer processing and melt processing are used to fabricate functionally graded materials.
|
wikipedia
|
yield (engineering)
|
In materials science and engineering, the yield point is the point on a stress-strain curve that indicates the limit of elastic behavior and the beginning of plastic behavior. Below the yield point, a material will deform elastically and will return to its original shape when the applied stress is removed. Once the yield point is passed, some fraction of the deformation will be permanent and non-reversible and is known as plastic deformation. The yield strength or yield stress is a material property and is the stress corresponding to the yield point at which the material begins to deform plastically.
|
wikipedia
|
yield (engineering)
|
The yield strength is often used to determine the maximum allowable load in a mechanical component, since it represents the upper limit to forces that can be applied without producing permanent deformation. In some materials, such as aluminium, there is a gradual onset of non-linear behavior, and no precise yield point. In such a case, the offset yield point (or proof stress) is taken as the stress at which 0.2% plastic deformation occurs.
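A minimal sketch (not from the article) of the 0.2% offset (proof stress) construction mentioned above: intersect the stress-strain curve with a line of elastic slope shifted by 0.002 strain. The bilinear curve below is synthetic.

```python
import numpy as np

E = 200e3                                    # elastic modulus, MPa (hypothetical)
strain = np.linspace(0.0, 0.02, 400)
# synthetic curve: elastic up to ~250 MPa, then a shallow hardening slope
stress = np.minimum(E * strain, 250.0 + 1000.0 * (strain - 250.0 / E))

offset_line = E * (strain - 0.002)           # elastic slope shifted by 0.2% strain
idx = int(np.argmax(stress <= offset_line))  # first point past the intersection
print(f"0.2% offset yield stress: {stress[idx]:.0f} MPa")   # about 252 MPa here
```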
|
wikipedia
|
yield (engineering)
|
Yielding is a gradual failure mode which is normally not catastrophic, unlike ultimate failure. In solid mechanics, the yield point can be specified in terms of the three-dimensional principal stresses $(\sigma_1, \sigma_2, \sigma_3)$ with a yield surface or a yield criterion. A variety of yield criteria have been developed for different materials.
|
wikipedia
|
functionally graded element
|
In materials science and mathematics, functionally graded elements are elements used in finite element analysis. They can be used to describe a functionally graded material.
|
wikipedia
|
work hardened
|
In materials science parlance, dislocations are defined as line defects in a material's crystal structure. The bonds surrounding the dislocation are already elastically strained by the defect compared to the bonds between the constituents of the regular crystal lattice. Therefore, these bonds break at relatively lower stresses, leading to plastic deformation. The strained bonds around a dislocation are characterized by lattice strain fields.
|
wikipedia
|
work hardened
|
For example, there are compressively strained bonds directly next to an edge dislocation and tensilely strained bonds beyond the end of an edge dislocation. These form compressive strain fields and tensile strain fields, respectively. Strain fields are analogous to electric fields in certain ways.
|
wikipedia
|
work hardened
|
Specifically, the strain fields of dislocations obey similar laws of attraction and repulsion; in order to reduce overall strain, compressive strains are attracted to tensile strains, and vice versa. The visible (macroscopic) results of plastic deformation are the result of microscopic dislocation motion. For example, the stretching of a steel rod in a tensile tester is accommodated through dislocation motion on the atomic scale.
|
wikipedia
|
advanced composite materials (engineering)
|
In materials science, advanced composite materials (ACMs) are materials that are generally characterized by unusually high-strength fibres with unusually high stiffness, or modulus of elasticity, compared to other materials, while bound together by weaker matrices. These are termed "advanced composite materials" in comparison to the composite materials commonly in use, such as reinforced concrete, or even concrete itself. The high-strength fibers are also low density while occupying a large fraction of the volume.
|
wikipedia
|
advanced composite materials (engineering)
|
Advanced composites exhibit desirable physical and chemical properties that include light weight coupled with high stiffness (elasticity), and strength along the direction of the reinforcing fiber, dimensional stability, temperature and chemical resistance, flex performance, and relatively easy processing. Advanced composites are replacing metal components in many uses, particularly in the aerospace industry. Composites are classified according to their matrix phase.
|
wikipedia
|
advanced composite materials (engineering)
|
These classifications are polymer matrix composites (PMCs), ceramic matrix composites (CMCs), and metal matrix composites (MMCs). Also, materials within these categories are often called "advanced" if they combine the properties of high (axial, longitudinal) strength values and high (axial, longitudinal) stiffness values, with low weight, corrosion resistance, and in some cases special electrical properties. Advanced composite materials have broad, proven applications, in the aircraft, aerospace, and sports equipment sectors.
|
wikipedia
|
advanced composite materials (engineering)
|
Even more specifically, ACMs are very attractive for aircraft and aerospace structural parts. ACMs have been developed for NASA's Advanced Space Transportation Program, armor protection for Army aviation and the Federal Aviation Administration of the USA, and high-temperature shafting for the Comanche helicopter. Additionally, ACMs have a decades-long history in military and government aerospace industries. However, much of the technology is new and not presented formally in secondary or undergraduate education, and the technology of advanced composites manufacture is continually evolving.
|
wikipedia
|
intrinsic properties
|
In materials science, an intrinsic property is independent of how much of a material is present and is independent of the form of the material, e.g., one large piece or a collection of small particles. Intrinsic properties are dependent mainly on the fundamental chemical composition and structure of the material. Extrinsic properties are differentiated as being dependent on the presence of avoidable chemical contaminants or structural defects. In biology, intrinsic effects originate from inside an organism or cell, such as an autoimmune disease or intrinsic immunity. In electronics and optics, intrinsic properties of devices (or systems of devices) are generally those that are free from the influence of various types of non-essential defects. Such defects may arise as a consequence of design imperfections, manufacturing errors, or operational extremes and can produce distinctive and often undesirable extrinsic properties. The identification, optimization, and control of both intrinsic and extrinsic properties are among the engineering tasks necessary to achieve the high performance and reliability of modern electrical and optical systems.
|
wikipedia
|
euler angle
|
In materials science, crystallographic texture (or preferred orientation) can be described using Euler angles. In texture analysis, the Euler angles provide a mathematical depiction of the orientation of individual crystallites within a polycrystalline material, allowing for the quantitative description of the macroscopic material. The most common definition of the angles is due to Bunge and corresponds to the ZXZ convention. It is important to note, however, that the application generally involves axis transformations of tensor quantities, i.e. passive rotations. Thus the matrix that corresponds to the Bunge Euler angles is the transpose of that shown in the table above.
|
wikipedia
|
formative evaluation
|
In math education, it is important for teachers to see how their students approach problems and how much mathematical knowledge, and at what level, students use when solving them. That is, knowing how students think in the process of learning or problem solving makes it possible for teachers to help their students overcome conceptual difficulties and, in turn, improve learning. In that sense, formative assessment is diagnostic. To employ formative assessment in the classroom, a teacher has to make sure that each student participates in the learning process by expressing their ideas; that there is a trustful environment in which students can provide each other with feedback; that she or he (the teacher) provides students with feedback; and that the instruction is modified according to students' needs. In math classes, thought-revealing activities such as model-eliciting activities (MEAs) and generative activities provide good opportunities for covering these aspects of formative assessment.
|
wikipedia
|
sigma field
|
In mathematical analysis and in probability theory, a σ-algebra (also σ-field) on a set X is a nonempty collection Σ of subsets of X closed under complement, countable unions, and countable intersections. The ordered pair $(X, \Sigma)$ is called a measurable space. The σ-algebras are a subset of the set algebras; elements of the latter only need to be closed under the union or intersection of finitely many subsets, which is a weaker condition. The main use of σ-algebras is in the definition of measures; specifically, the collection of those subsets for which a given measure is defined is necessarily a σ-algebra. This concept is important in mathematical analysis as the foundation for Lebesgue integration, and in probability theory, where it is interpreted as the collection of events which can be assigned probabilities.
|
wikipedia
|
sigma field
|
Also, in probability, σ-algebras are pivotal in the definition of conditional expectation. In statistics, (sub) σ-algebras are needed for the formal mathematical definition of a sufficient statistic, particularly when the statistic is a function or a random process and the notion of conditional density is not applicable. If $X = \{a, b, c, d\}$, one possible σ-algebra on $X$ is $\Sigma = \{\varnothing, \{a, b\}, \{c, d\}, \{a, b, c, d\}\}$, where $\varnothing$ is the empty set.
|
wikipedia
|
sigma field
|
In general, a finite algebra is always a σ-algebra. If $\{A_1, A_2, A_3, \ldots\}$ is a countable partition of $X$, then the collection of all unions of sets in the partition (including the empty set) is a σ-algebra. A more useful example is the set of subsets of the real line formed by starting with all open intervals and adding in all countable unions, countable intersections, and relative complements and continuing this process (by transfinite iteration through all countable ordinals) until the relevant closure properties are achieved (a construction known as the Borel hierarchy).
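A tiny sketch (not from the article) checking the partition statement above on a four-element set: enumerate all unions of partition blocks and verify closure under complements and finite unions.

```python
from itertools import combinations

X = frozenset({"a", "b", "c", "d"})
partition = [frozenset({"a", "b"}), frozenset({"c", "d"})]

# all unions of blocks, including the empty union
sigma = set()
for r in range(len(partition) + 1):
    for blocks in combinations(partition, r):
        sigma.add(frozenset().union(*blocks))

print(sorted(map(sorted, sigma)))   # the four members: empty set, {a,b}, {c,d}, X

# closure checks: complements and (finite) unions stay inside the collection
assert all(X - s in sigma for s in sigma)
assert all(s | t in sigma for s in sigma for t in sigma)
```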
|
wikipedia
|
multi-variable function
|
In mathematical analysis and its applications, a function of several real variables or real multivariate function is a function with more than one argument, with all arguments being real variables. This concept extends the idea of a function of a real variable to several variables. The "input" variables take real values, while the "output", also called the "value of the function", may be real or complex. However, the study of the complex-valued functions may be easily reduced to the study of the real-valued functions, by considering the real and imaginary parts of the complex function; therefore, unless explicitly specified, only real-valued functions will be considered in this article. The domain of a function of n variables is the subset of $\mathbb{R}^n$ for which the function is defined. As usual, the domain of a function of several real variables is supposed to contain a nonempty open subset of $\mathbb{R}^n$.
|
wikipedia
|
bernstein's theorem (polynomials)
|
In mathematical analysis, Bernstein's inequality states that on the complex plane, within the disk of radius 1, the degree of a polynomial times the maximum value of the polynomial is an upper bound for the similar maximum of its derivative. Taking the k-th derivative of the theorem, $$\max_{|z| \le 1} |P^{(k)}(z)| \le \frac{n!}{(n-k)!} \cdot \max_{|z| \le 1} |P(z)|.$$
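A quick numerical sanity check (not from the article) of the k = 1 case of the inequality above, for a random degree-5 polynomial. By the maximum modulus principle both maxima are attained on the boundary, so sampling the unit circle suffices.

```python
import numpy as np

rng = np.random.default_rng(1)
coeffs = rng.normal(size=6) + 1j * rng.normal(size=6)   # random degree-5 polynomial
P = np.polynomial.Polynomial(coeffs)
dP = P.deriv()
n = P.degree()

z = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 4000))    # points on |z| = 1
max_P = np.abs(P(z)).max()
max_dP = np.abs(dP(z)).max()
print(max_dP <= n * max_P + 1e-9)   # True: max|P'| <= n * max|P| on the disk
```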
|
wikipedia
|
ivar ekeland
|
In mathematical analysis, Ekeland's variational principle, discovered by Ivar Ekeland, is a theorem that asserts that there exists a nearly optimal solution to a class of optimization problems. Ekeland's variational principle can be used when the lower level set of a minimization problem is not compact, so that the Bolzano–Weierstrass theorem cannot be applied. Ekeland's principle relies on the completeness of the metric space. Ekeland's principle leads to a quick proof of the Caristi fixed point theorem. Ekeland was associated with the University of Paris when he proposed this theorem.
|
wikipedia
|
leonid kantorovich
|
In mathematical analysis, Kantorovich had important results in functional analysis, approximation theory, and operator theory. In particular, Kantorovich formulated some fundamental results in the theory of normed vector lattices, especially in Dedekind complete vector lattices called "K-spaces" which are now referred to as "Kantorovich spaces" in his honor. Kantorovich showed that functional analysis could be used in the analysis of iterative methods, obtaining the Kantorovich inequalities on the convergence rate of the gradient method and of Newton's method (see the Kantorovich theorem). Kantorovich considered infinite-dimensional optimization problems, such as the Kantorovich-Monge problem in transport theory. His analysis proposed the Kantorovich–Rubinstein metric, which is used in probability theory, in the theory of the weak convergence of probability measures.
|
wikipedia
|
lipschitz function
|
In mathematical analysis, Lipschitz continuity, named after German mathematician Rudolf Lipschitz, is a strong form of uniform continuity for functions. Intuitively, a Lipschitz continuous function is limited in how fast it can change: there exists a real number such that, for every pair of points on the graph of this function, the absolute value of the slope of the line connecting them is not greater than this real number; the smallest such bound is called the Lipschitz constant of the function (and is related to the modulus of uniform continuity). For instance, every function that is defined on an interval and has a bounded first derivative is Lipschitz continuous. In the theory of differential equations, Lipschitz continuity is the central condition of the Picard–Lindelöf theorem, which guarantees the existence and uniqueness of the solution to an initial value problem. A special type of Lipschitz continuity, called contraction, is used in the Banach fixed-point theorem. We have the following chain of strict inclusions for functions over a closed and bounded non-trivial interval of the real line: continuously differentiable ⊂ Lipschitz continuous ⊂ $\alpha$-Hölder continuous, where $0 < \alpha \le 1$. We also have: Lipschitz continuous ⊂ absolutely continuous ⊂ uniformly continuous.
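A small sketch (not from the article): numerically estimating the Lipschitz constant of f(x) = sin(x) on [0, 2π] by taking the largest slope over sampled point pairs; for sin the constant is 1, the bound on |f'|.

```python
import numpy as np

f = np.sin
x = np.linspace(0.0, 2.0 * np.pi, 600)
dx = x[:, None] - x[None, :]
dy = f(x)[:, None] - f(x)[None, :]
slopes = np.abs(dy[dx != 0] / dx[dx != 0])   # slopes between all distinct pairs
print(slopes.max())                           # ~1.0, approaching the Lipschitz constant
```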
|
wikipedia
|
carathéodory function
|
In mathematical analysis, a Carathéodory function (or Carathéodory integrand) is a multivariable function that allows us to solve the following problem effectively: a composition of two Lebesgue-measurable functions does not have to be Lebesgue-measurable as well. Nevertheless, a composition of a measurable function with a continuous function is indeed Lebesgue-measurable, but in many situations continuity is too restrictive an assumption. Carathéodory functions are more general than continuous functions, but still allow a composition with a Lebesgue-measurable function to be measurable. Carathéodory functions play a significant role in the calculus of variations, and they are named after the Greek mathematician Constantin Carathéodory.
|
wikipedia
|
hermitian function
|
In mathematical analysis, a Hermitian function is a complex function with the property that its complex conjugate is equal to the original function with the variable changed in sign: $f^*(x) = f(-x)$ (where $^*$ indicates the complex conjugate) for all $x$ in the domain of $f$. In physics, this property is referred to as PT symmetry. This definition extends also to functions of two or more variables; e.g., in the case that $f$ is a function of two variables it is Hermitian if $f^*(x_1, x_2) = f(-x_1, -x_2)$ for all pairs $(x_1, x_2)$ in the domain of $f$. From this definition it follows immediately that $f$ is a Hermitian function if and only if the real part of $f$ is an even function and the imaginary part of $f$ is an odd function.
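A tiny sketch (not from the article) checking the defining property and its consequence for f(x) = exp(ix), which is Hermitian: its real part (cos) is even and its imaginary part (sin) is odd.

```python
import numpy as np

f = lambda x: np.exp(1j * x)
x = np.linspace(-5.0, 5.0, 101)

assert np.allclose(np.conj(f(x)), f(-x))      # f*(x) = f(-x)
assert np.allclose(f(x).real, f(-x).real)     # even real part
assert np.allclose(f(x).imag, -f(-x).imag)    # odd imaginary part
print("exp(ix) satisfies the Hermitian-function property")
```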
|
wikipedia
|
function of bounded variation
|
In mathematical analysis, a function of bounded variation, also known as BV function, is a real-valued function whose total variation is bounded (finite): the graph of a function having this property is well behaved in a precise sense. For a continuous function of a single variable, being of bounded variation means that the distance along the direction of the y-axis, neglecting the contribution of motion along x-axis, traveled by a point moving along the graph has a finite value. For a continuous function of several variables, the meaning of the definition is the same, except for the fact that the continuous path to be considered cannot be the whole graph of the given function (which is a hypersurface in this case), but can be every intersection of the graph itself with a hyperplane (in the case of functions of two variables, a plane) parallel to a fixed x-axis and to the y-axis. Functions of bounded variation are precisely those with respect to which one may find Riemann–Stieltjes integrals of all continuous functions.
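A small sketch (not from the article): for a sampled function of one variable, the total variation can be approximated by summing the absolute jumps between consecutive samples; a function is of bounded variation when this quantity stays finite as the sampling is refined.

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 10_000)
f = np.sin(x)
total_variation = np.abs(np.diff(f)).sum()   # sum of absolute increments
print(total_variation)                        # ~4.0 for sin on [0, 2*pi]
```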
|
wikipedia
|
function of bounded variation
|
Another characterization states that the functions of bounded variation on a compact interval are exactly those f which can be written as a difference g − h, where both g and h are bounded monotone. In particular, a BV function may have discontinuities, but at most countably many.
|
wikipedia
|
function of bounded variation
|
In the case of several variables, a function f defined on an open subset Ω of $\mathbb{R}^n$ is said to have bounded variation if its distributional derivative is a vector-valued finite Radon measure. One of the most important aspects of functions of bounded variation is that they form an algebra of discontinuous functions whose first derivative exists almost everywhere: due to this fact, they can be and frequently are used to define generalized solutions of nonlinear problems involving functionals, ordinary and partial differential equations in mathematics, physics and engineering. We have the following chain of inclusions for continuous functions over a closed, bounded interval of the real line: continuously differentiable ⊆ Lipschitz continuous ⊆ absolutely continuous ⊆ continuous and bounded variation ⊆ differentiable almost everywhere.
|
wikipedia
|
function of a real variable
|
In mathematical analysis, and applications in geometry, applied mathematics, engineering, and natural sciences, a function of a real variable is a function whose domain is the real numbers $\mathbb{R}$, or a subset of $\mathbb{R}$ that contains an interval of positive length. Most real functions that are considered and studied are differentiable in some interval. The most widely considered such functions are the real functions, which are the real-valued functions of a real variable, that is, the functions of a real variable whose codomain is the set of real numbers. Nevertheless, the codomain of a function of a real variable may be any set.
|
wikipedia
|
function of a real variable
|
However, it is often assumed to have a structure of $\mathbb{R}$-vector space over the reals. That is, the codomain may be a Euclidean space, a coordinate vector, the set of matrices of real numbers of a given size, or an $\mathbb{R}$-algebra, such as the complex numbers or the quaternions. The structure of $\mathbb{R}$-vector space of the codomain induces a structure of $\mathbb{R}$-vector space on the functions.
|
wikipedia
|
function of a real variable
|
If the codomain has a structure of $\mathbb{R}$-algebra, the same is true for the functions. The image of a function of a real variable is a curve in the codomain. In this context, a function that defines a curve is called a parametric equation of the curve. When the codomain of a function of a real variable is a finite-dimensional vector space, the function may be viewed as a sequence of real functions. This is often used in applications.
|
wikipedia
|
continuous functions on a compact hausdorff space
|
In mathematical analysis, and especially functional analysis, a fundamental role is played by the space of continuous functions on a compact Hausdorff space $X$ with values in the real or complex numbers. This space, denoted by $\mathcal{C}(X)$, is a vector space with respect to the pointwise addition of functions and scalar multiplication by constants. It is, moreover, a normed space with norm defined by the uniform norm. The uniform norm defines the topology of uniform convergence of functions on $X$.
|
wikipedia
|
continuous functions on a compact hausdorff space
|
The space $\mathcal{C}(X)$ is a Banach algebra with respect to this norm. (Rudin 1973, §11.3)
|
wikipedia
|
function (mathematics)
|
In mathematical analysis, and more specifically in functional analysis, a function space is a set of scalar-valued or vector-valued functions which share a specific property and form a topological vector space. For example, the real smooth functions with a compact support (that is, they are zero outside some compact set) form a function space that is at the basis of the theory of distributions. Function spaces play a fundamental role in advanced mathematical analysis by allowing the use of their algebraic and topological properties for studying properties of functions. For example, all theorems of existence and uniqueness of solutions of ordinary or partial differential equations result from the study of function spaces.
|
wikipedia
|
constructive function theory
|
In mathematical analysis, constructive function theory is a field which studies the connection between the smoothness of a function and its degree of approximation. It is closely related to approximation theory. The term was coined by Sergei Bernstein.
|
wikipedia
|
power series ring
|
In mathematical analysis, every convergent power series defines a function with values in the real or complex numbers. Formal power series over certain special rings can also be interpreted as functions, but one has to be careful with the domain and codomain. Let $f = \sum a_n X^n \in R[[X]]$, and suppose $S$ is a commutative associative algebra over $R$, $I$ is an ideal in $S$ such that the I-adic topology on $S$ is complete, and $x$ is an element of $I$. Define: $$f(x) = \sum_{n \ge 0} a_n x^n.$$
|
wikipedia
|
power series ring
|
This series is guaranteed to converge in $S$ given the above assumptions on $x$. Furthermore, we have $(f + g)(x) = f(x) + g(x)$ and $(fg)(x) = f(x)g(x)$.
|
wikipedia
|
power series ring
|
Unlike in the case of bona fide functions, these formulas are not definitions but have to be proved. Since the topology on $R[[X]]$ is the $(X)$-adic topology and $R[[X]]$ is complete, we can in particular apply power series to other power series, provided that the arguments don't have constant coefficients (so that they belong to the ideal $(X)$): $f(0)$, $f(X^2 - X)$ and $f((1 - X)^{-1} - 1)$ are all well defined for any formal power series $f \in R[[X]]$.
|
wikipedia
|
power series ring
|
With this formalism, we can give an explicit formula for the multiplicative inverse of a power series $f$ whose constant coefficient $a = f(0)$ is invertible in $R$: $$f^{-1} = \sum_{n \ge 0} a^{-n-1} (a - f)^n.$$ If the formal power series $g$ with $g(0) = 0$ is given implicitly by the equation $f(g) = X$, where $f$ is a known power series with $f(0) = 0$, then the coefficients of $g$ can be explicitly computed using the Lagrange inversion formula.
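A minimal sketch (not from the article) applying the explicit inverse formula above to power series truncated at order 8 with rational coefficients, and checking that f multiplied by f^{-1} gives 1 up to the truncation order.

```python
from fractions import Fraction

N = 8  # work modulo X^N

def mul(f, g):
    """Product of two truncated power series (lists of coefficients)."""
    h = [Fraction(0)] * N
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < N:
                h[i + j] += a * b
    return h

def inverse(f):
    """f^{-1} = sum_{n>=0} a^{-n-1} (a - f)^n, with a = f(0) invertible."""
    a = f[0]
    a_minus_f = [a - f[0]] + [-c for c in f[1:]]          # (a - f), zero constant term
    term = [Fraction(1)] + [Fraction(0)] * (N - 1)         # (a - f)^0
    result = [Fraction(0)] * N
    for n in range(N):                                     # terms of order >= n
        coef = a ** (-(n + 1))                             # a^{-n-1}
        result = [r + coef * t for r, t in zip(result, term)]
        term = mul(term, a_minus_f)
    return result

f = [Fraction(2), Fraction(1), Fraction(3)] + [Fraction(0)] * (N - 3)  # 2 + X + 3X^2
print(mul(f, inverse(f)))   # constant term 1, all higher coefficients 0 (mod X^8)
```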
|
wikipedia
|
proper convex function
|
In mathematical analysis, in particular the subfields of convex analysis and optimization, a proper convex function is an extended real-valued convex function with a non-empty domain that never takes on the value $-\infty$ and also is not identically equal to $+\infty$. In convex analysis and variational analysis, a point (in the domain) at which some given function $f$ is minimized is typically sought, where $f$ is valued in the extended real number line $[-\infty, \infty] = \mathbb{R} \cup \{\pm\infty\}$.
|
wikipedia
|
proper convex function
|
Such a point, if it exists, is called a global minimum point of the function, and its value at this point is called the global minimum (value) of the function. If the function takes $-\infty$ as a value then $-\infty$ is necessarily the global minimum value and the minimization problem can be answered; this is ultimately the reason why the definition of "proper" requires that the function never take $-\infty$ as a value. Assuming this, if the function's domain is empty or if the function is identically equal to $+\infty$ then the minimization problem once again has an immediate answer.
|
wikipedia
|
proper convex function
|
Extended real-valued functions for which the minimization problem is not solved by any one of these three trivial cases are exactly those that are called proper. Many (although not all) results whose hypotheses require that the function be proper add this requirement specifically to exclude these trivial cases. If the problem is instead a maximization problem (which would be clearly indicated, such as by the function being concave rather than convex) then the definition of "proper" is defined in an analogous (albeit technically different) manner but with the same goal: to exclude cases where the maximization problem can be answered immediately. Specifically, a concave function $g$ is called proper if its negation $-g$, which is a convex function, is proper in the sense defined above.
|
wikipedia
|
parameter
|
In mathematical analysis, integrals dependent on a parameter are often considered. These are of the form $$F(t) = \int_{x_0(t)}^{x_1(t)} f(x; t)\, dx.$$ In this formula, t is the argument of the function F, and on the right-hand side the parameter on which the integral depends.
|
wikipedia
|
parameter
|
When evaluating the integral, t is held constant, and so it is considered to be a parameter. If we are interested in the value of F for different values of t, we then consider t to be a variable. The quantity x is a dummy variable or variable of integration (confusingly, also sometimes called a parameter of integration).
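A small sketch (not from the article) of such a parameter-dependent integral evaluated numerically; the integrand and limits below are chosen only for illustration, and t is held constant inside each quadrature.

```python
from scipy.integrate import quad
import numpy as np

def F(t):
    integrand = lambda x: np.exp(-t * x)   # t is a parameter, x the variable of integration
    value, _abserr = quad(integrand, 0.0, t)
    return value

for t in (0.5, 1.0, 2.0):
    # closed form for comparison: (1 - exp(-t^2)) / t
    print(t, F(t), (1.0 - np.exp(-t * t)) / t)
```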
|
wikipedia
|
bessel–clifford function
|
In mathematical analysis, the Bessel–Clifford function, named after Friedrich Bessel and William Kingdon Clifford, is an entire function of two complex variables that can be used to provide an alternative development of the theory of Bessel functions. If $$\pi(x) = \frac{1}{\Pi(x)} = \frac{1}{\Gamma(x+1)}$$ is the entire function defined by means of the reciprocal gamma function, then the Bessel–Clifford function is defined by the series $$\mathcal{C}_n(z) = \sum_{k=0}^{\infty} \pi(k+n) \frac{z^k}{k!}.$$ The ratio of successive terms is z/(k(n + k)), which for all values of z and n tends to zero with increasing k. By the ratio test, this series converges absolutely for all z and n, and uniformly for all regions with bounded |z|, and hence the Bessel–Clifford function is an entire function of the two complex variables n and z.
|
wikipedia
|
mean-periodic function
|
In mathematical analysis, the concept of a mean-periodic function is a generalization of the concept of a periodic function introduced in 1935 by Jean Delsarte. Further results were made by Laurent Schwartz.
|
wikipedia
|
spaces of test functions and distributions
|
In mathematical analysis, the spaces of test functions and distributions are topological vector spaces (TVSs) that are used in the definition and application of distributions. Test functions are usually infinitely differentiable complex-valued (or sometimes real-valued) functions on a non-empty open subset $U \subseteq \mathbb{R}^n$ that have compact support. The space of all test functions, denoted by $C_c^\infty(U)$, is endowed with a certain topology, called the canonical LF-topology, that makes $C_c^\infty(U)$ into a complete Hausdorff locally convex TVS. The strong dual space of $C_c^\infty(U)$ is called the space of distributions on $U$ and is denoted by $\mathcal{D}'(U) := (C_c^\infty(U))'_b$, where the "$b$" subscript indicates that the continuous dual space of $C_c^\infty(U)$, denoted by $(C_c^\infty(U))'$, is endowed with the strong dual topology.
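A brief sketch (not from the article): the standard bump function is a classic example of a test function, infinitely differentiable with compact support in (-1, 1). Here it is evaluated on a grid to show the support.

```python
import numpy as np

def bump(x):
    """exp(-1 / (1 - x^2)) inside (-1, 1), and 0 outside."""
    out = np.zeros_like(x, dtype=float)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

x = np.linspace(-2.0, 2.0, 9)
print(np.round(bump(x), 4))   # zero outside [-1, 1], positive inside (-1, 1)
```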
|
wikipedia
|
spaces of test functions and distributions
|
There are other possible choices for the space of test functions, which lead to other different spaces of distributions. If $U = \mathbb{R}^n$ then the use of Schwartz functions as test functions gives rise to a certain subspace of $\mathcal{D}'(U)$ whose elements are called tempered distributions. These are important because they allow the Fourier transform to be extended from "standard functions" to tempered distributions.
|
wikipedia
|
spaces of test functions and distributions
|
The set of tempered distributions forms a vector subspace of the space of distributions $\mathcal{D}'(U)$ and is thus one example of a space of distributions; there are many other spaces of distributions. There also exist other major classes of test functions that are not subsets of $C_c^\infty(U)$, such as spaces of analytic test functions, which produce very different classes of distributions. The theory of such distributions has a different character from the previous one because there are no analytic functions with non-empty compact support. Use of analytic test functions leads to Sato's theory of hyperfunctions.
|
wikipedia
|
multimedia
|
In mathematical and scientific research, multimedia is mainly used for modeling and simulation. For example, a scientist can look at a molecular model of a particular substance and manipulate it to arrive at a new substance. Representative research can be found in journals such as the Journal of Multimedia.
|
wikipedia
|
multimedia
|
One well-known example of this being applied would be in the movie Interstellar, where executive producer Kip Thorne helped create one of the most realistic depictions of a black hole in film. The visual effects team under Paul Franklin took Kip Thorne's mathematical data and applied it in their own visual effects engine, the "Double Negative Gravitational Renderer" (a.k.a. "Gargantua"), to create a "real" black hole used in the final cut. The visual effects team later went on to publish a black hole study.
|
wikipedia
|
geometric function theory
|
In mathematical complex analysis, a quasiconformal mapping, introduced by Grötzsch (1928) and named by Ahlfors (1935), is a homeomorphism between plane domains which to first order takes small circles to small ellipses of bounded eccentricity. Intuitively, let f: D → D′ be an orientation-preserving homeomorphism between open sets in the plane. If f is continuously differentiable, then it is K-quasiconformal if the derivative of f at every point maps circles to ellipses with eccentricity bounded by K. If K is 0, then the function is conformal.
|
wikipedia
|
semi colon
|
In the calculus of relations, the semicolon is used in infix notation for the composition of relations: $$A ; B = \{(x, z) : \exists y\ \ xAy \land yBz\}.$$ The Humphrey point (;) is sometimes used as the "decimal point" in duodecimal numbers: 54;6 in base 12 equals 64.5 in base 10.
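A tiny sketch (not from the article) of relation composition as defined above, with relations represented as sets of ordered pairs.

```python
def compose(A, B):
    """A ; B = {(x, z) : there exists y with (x, y) in A and (y, z) in B}."""
    return {(x, z) for (x, y1) in A for (y2, z) in B if y1 == y2}

A = {(1, "a"), (2, "b")}
B = {("a", "alpha"), ("b", "beta"), ("c", "gamma")}
print(compose(A, B))   # {(1, 'alpha'), (2, 'beta')}
```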
|
wikipedia
|
isoelastic function
|
In mathematical economics, an isoelastic function, sometimes constant elasticity function, is a function that exhibits a constant elasticity, i.e. has a constant elasticity coefficient. The elasticity is the ratio of the percentage change in the dependent variable to the percentage causative change in the independent variable, in the limit as the changes approach zero in magnitude. For an elasticity coefficient $r$ (which can take on any real value), the function's general form is given by $f(x) = k x^r$, where $k$ and $r$ are constants. The elasticity is by definition $$\text{elasticity} = \frac{df(x)}{dx} \frac{x}{f(x)} = \frac{d \ln f(x)}{d \ln x},$$ which for this function simply equals r.
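A short numerical sketch (not from the article): the point elasticity of f(x) = k·x^r, computed as f'(x)·x/f(x), is the same constant r at every x. The values k = 3 and r = -1.5 are arbitrary illustration choices.

```python
k, r = 3.0, -1.5
f = lambda x: k * x ** r
df = lambda x: k * r * x ** (r - 1.0)   # exact derivative of k*x**r

for x in (0.5, 1.0, 10.0):
    elasticity = df(x) * x / f(x)
    print(x, elasticity)                 # always -1.5
```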
|
wikipedia
|
solomon mikhlin
|
In mathematical elasticity theory, Mikhlin was concerned by three themes: the plane problem (mainly from 1932 to 1935), the theory of shells (from 1954) and the Cosserat spectrum (from 1967 to 1973). Dealing with the plane elasticity problem, he proposed two methods for its solution in multiply connected domains. The first one is based upon the so-called complex Green's function and the reduction of the related boundary value problem to integral equations. The second method is a certain generalization of the classical Schwarz algorithm for the solution of the Dirichlet problem in a given domain by splitting it in simpler problems in smaller domains whose union is the original one.
|
wikipedia
|
solomon mikhlin
|
Mikhlin studied its convergence and gave applications to special applied problems. He proved existence theorems for the fundamental problems of plane elasticity involving inhomogeneous anisotropic media: these results are collected in the book (Mikhlin 1957). Concerning the theory of shells, there are several articles by Mikhlin dealing with it.
|
wikipedia
|
solomon mikhlin
|
He studied the error of the approximate solution for shells, similar to plane plates, and found that this error is small for the so-called purely rotational state of stress. As a result of his study of this problem, Mikhlin also gave a new (invariant) form of the basic equations of the theory. He also proved a theorem on perturbations of positive operators in a Hilbert space which let him obtain an error estimate for the problem of approximating a sloping shell by a plane plate.
|
wikipedia
|
solomon mikhlin
|
Mikhlin also studied the spectrum of the operator pencil of the classical linear elastostatic operator or Navier–Cauchy operator $$\mathcal{A}(\omega)\,\boldsymbol{u} = \Delta_2 \boldsymbol{u} + \omega \nabla(\nabla \cdot \boldsymbol{u}),$$ where $\boldsymbol{u}$ is the displacement vector, $\Delta_2$ is the vector Laplacian, $\nabla$ is the gradient, $\nabla\cdot$ is the divergence, and $\omega$ is a Cosserat eigenvalue. The full description of the spectrum and the proof of the completeness of the system of eigenfunctions are also due to Mikhlin, and partly to V.G. Maz'ya in their only joint work.
|
wikipedia
|
kelly criterion
|
In mathematical finance, if security weights maximize the expected geometric growth rate (which is equivalent to maximizing log wealth), then a portfolio is growth optimal. Computations of growth optimal portfolios can suffer tremendous garbage in, garbage out problems. For example, the cases below take as given the expected return and covariance structure of assets, but these parameters are at best estimates or models that have significant uncertainty.
|
wikipedia
|
kelly criterion
|
If portfolio weights are largely a function of estimation errors, then ex-post performance of a growth-optimal portfolio may differ fantastically from the ex-ante prediction. Parameter uncertainty and estimation errors are a large topic in portfolio theory. An approach to counteract the unknown risk is to invest less than the Kelly criterion (e.g., half).
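A small sketch (not from the article): the Kelly fraction for a simple binary bet, and the fractional "half Kelly" strategy mentioned above as a way to hedge against estimation error. The formula f* = p - q/b applies to a bet paying b-to-1 with win probability p; the edge below is a hypothetical value.

```python
def kelly_fraction(p, b):
    """Optimal fraction of wealth to wager on a b-to-1 bet won with probability p."""
    q = 1.0 - p
    return p - q / b

p, b = 0.55, 1.0                      # hypothetical edge on an even-money bet
full = kelly_fraction(p, b)
print(f"full Kelly: {full:.2%}")      # 10.00%
print(f"half Kelly: {full / 2:.2%}")  # 5.00%
```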
|
wikipedia
|
doob decomposition theorem
|
In mathematical finance, the Doob decomposition theorem can be used to determine the largest optimal exercise time of an American option. Let X = (X0, X1, . . .
|
wikipedia
|
doob decomposition theorem
|
, XN) denote the non-negative, discounted payoffs of an American option in a N-period financial market model, adapted to a filtration (F0, F1, . . .
|
wikipedia
|
doob decomposition theorem
|
, FN), and let $\mathbb{Q}$ denote an equivalent martingale measure. Let U = (U0, U1, . .
|
wikipedia
|
doob decomposition theorem
|
. , UN) denote the Snell envelope of X with respect to $\mathbb{Q}$. The Snell envelope is the smallest $\mathbb{Q}$-supermartingale dominating X, and in a complete financial market it represents the minimal amount of capital necessary to hedge the American option up to maturity.
|
wikipedia
|
doob decomposition theorem
|
Let U = M + A denote the Doob decomposition with respect to $\mathbb{Q}$ of the Snell envelope U into a martingale M = (M0, M1, . . .
|
wikipedia
|
doob decomposition theorem
|
, MN) and a decreasing predictable process A = (A0, A1, . . .
|
wikipedia
|
doob decomposition theorem
|
, AN) with A0 = 0. Then the largest stopping time to exercise the American option in an optimal way is $$\tau_{\text{max}} := \begin{cases} N & \text{if } A_N = 0, \\ \min\{n \in \{0, \dots, N-1\} \mid A_{n+1} < 0\} & \text{if } A_N < 0. \end{cases}$$ Since A is predictable, the event {τmax = n} = {An = 0, An+1 < 0} is in Fn for every n ∈ {0, 1, .
|
wikipedia
|
doob decomposition theorem
|
. . , N − 1}, hence τmax is indeed a stopping time. It gives the last moment before the discounted value of the American option will drop in expectation; up to time τmax the discounted value process U is a martingale with respect to $\mathbb{Q}$.
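A compact sketch (not from the article): computing the Snell envelope U of a discounted payoff process X on a two-period binary tree, the predictable part A of its Doob decomposition, and the stopping time τmax from the formula above. The payoff tree and the risk-neutral probability q = 0.5 are hypothetical illustration values.

```python
X = {                                     # X[node]: discounted payoff at that node
    (): 3.0,
    (0,): 1.0, (1,): 6.0,                 # 0 = up move, 1 = down move
    (0, 0): 1.0, (0, 1): 3.0, (1, 0): 4.0, (1, 1): 6.0,
}
N, q = 2, 0.5

def U(node):
    """Snell envelope: U_N = X_N, U_n = max(X_n, E_Q[U_{n+1} | F_n])."""
    if len(node) == N:
        return X[node]
    cont = q * U(node + (0,)) + (1 - q) * U(node + (1,))
    return max(X[node], cont)

for path in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    A = [0.0]                             # predictable part, A_0 = 0
    for n in range(N):
        node = path[:n]
        cont = q * U(node + (0,)) + (1 - q) * U(node + (1,))
        A.append(A[-1] + cont - U(node))  # increment E[U_{n+1}|F_n] - U_n <= 0
    tau_max = N if abs(A[N]) < 1e-12 else min(n for n in range(N) if A[n + 1] < 0)
    print(path, [round(a, 2) for a in A], "tau_max =", tau_max)
```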
|
wikipedia
|
kimeme
|
In mathematical folklore, the no free lunch theorem (sometimes pluralized) of David Wolpert and William G. Macready appears in the 1997 paper "No Free Lunch Theorems for Optimization". This mathematical result states the need for a specific effort in the design of a new algorithm, tailored to the specific problem to be optimized. Kimeme allows the design and experimentation of new optimization algorithms through the new paradigm of memetic computing, a subject of computational intelligence which studies algorithmic structures composed of multiple interacting and evolving modules (memes).
|
wikipedia
|
minus-plus sign
|
In mathematical formulas, the ± symbol may be used to indicate a symbol that may be replaced by either the plus or minus sign, + or −, allowing the formula to represent two values or two equations. If x² = 9, one may give the solution as x = ±3. This indicates that the equation has two solutions: x = +3 and x = −3. A common use of this notation is found in the quadratic formula $$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a},$$ which describes the two solutions to the quadratic equation ax² + bx + c = 0. Similarly, the trigonometric identity $$\sin(A \pm B) = \sin(A)\cos(B) \pm \cos(A)\sin(B)$$ can be interpreted as a shorthand for two equations: one with + on both sides of the equation, and one with − on both sides.
|
wikipedia
|
minus-plus sign
|
$$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \pm \frac{1}{(2n+1)!}\, x^{2n+1} + \cdots.$$ Here, the plus-or-minus sign indicates that the term may be added or subtracted depending on whether n is odd or even; a rule which can be deduced from the first few terms. A more rigorous presentation would multiply each term by a factor of (−1)^n, which gives +1 when n is even, and −1 when n is odd.
|
wikipedia
|
minus-plus sign
|
In older texts one occasionally finds (−)^n, which means the same. When the standard presumption that the plus-or-minus signs all take on the same value of +1 or all −1 is not true, then the line of text that immediately follows the equation must contain a brief description of the actual connection, if any, most often of the form "where the '±' signs are independent" or similar. If a brief, simple description is not possible, the equation must be re-written to provide clarity; e.g. by introducing variables such as s1, s2, ... and specifying a value of +1 or −1 separately for each, or some appropriate relation, like $s_3 = s_1 \cdot (s_2)^n$, or similar.
|
wikipedia
|
bernstein algebra
|
In mathematical genetics, a genetic algebra is a (possibly non-associative) algebra used to model inheritance in genetics. Some variations of these algebras are called train algebras, special train algebras, gametic algebras, Bernstein algebras, copular algebras, zygotic algebras, and baric algebras (also called weighted algebras). The study of these algebras was started by Ivor Etherington (1939). In applications to genetics, these algebras often have a basis corresponding to the genetically different gametes, and the structure constants of the algebra encode the probabilities of producing offspring of various types. The laws of inheritance are then encoded as algebraic properties of the algebra. For surveys of genetic algebras see Bertrand (1966), Wörz-Busekros (1980) and Reed (1997).
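As a concrete illustration (a standard textbook construction, not taken from this article), a minimal sketch of the gametic algebra for one locus with two alleles A and a, where the structure constants encode offspring gamete probabilities: A·A = A, a·a = a, A·a = ½A + ½a.

```python
from fractions import Fraction

half = Fraction(1, 2)
# structure constants c[(g1, g2)]: coefficients of the product on the basis (A, a)
c = {
    ("A", "A"): {"A": Fraction(1), "a": Fraction(0)},
    ("A", "a"): {"A": half, "a": half},
    ("a", "A"): {"A": half, "a": half},
    ("a", "a"): {"A": Fraction(0), "a": Fraction(1)},
}

def multiply(x, y):
    """Multiply two elements written as dicts {basis gamete: coefficient}."""
    out = {"A": Fraction(0), "a": Fraction(0)}
    for g1, w1 in x.items():
        for g2, w2 in y.items():
            for g3, w3 in c[(g1, g2)].items():
                out[g3] += w1 * w2 * w3
    return out

# a population producing 3/4 A gametes and 1/4 a gametes, squared:
p = {"A": Fraction(3, 4), "a": Fraction(1, 4)}
print(multiply(p, p))   # next generation's gamete frequencies: 3/4 A, 1/4 a
```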
|
wikipedia
|