Affine theory deals with two sets: one whose elements are called *locations* and another whose elements are called *directions*. The *directions* form an additive group. Every additive group carries a derived composition between its elements and integer scalars, defined recursively by $1\vec{U} = \vec{U}$ and $(n+1)\vec{U} = n\vec{U} + \vec{U}$ (composition symbol implied). Greek letters will be reserved for scalar names, upper-case letters for location names, and arrowed upper-case letters for direction names. In both cases, lower case indicates a set of elements.

Directional expressions $\alpha_1\vec{U}_1 + \dots + \alpha_n\vec{U}_n$ are called weighted sums. Similar expressions can be defined for *locations*, but some restrictions apply: a difference of *locations* is a *direction*; the difference $B - A$ is also written $\vec{AB}$. The following holds:

- $A - A = \vec{0}$.
- $(B - A) + (C - B) = C - A$.

The last expression can be generalized to include all weighted sums $\sum_i \alpha_i A_i$ with $\sum_i \alpha_i = 0$, which define *directions*. On the other hand, if the sum of weights is nonzero, this expression is not defined as a direction. However, in that case we postulate the existence of a unique location $P$ satisfying $\sum_i \alpha_i (A_i - P) = \vec{0}$. This element will be denoted by $\frac{\sum_i \alpha_i A_i}{\sum_i \alpha_i}$. A *direction* can be added to a *location* to produce another *location*. Indeed, if $\vec{V} = B - A$, then for any location $C$ we can define $C + \vec{V}$ as the unique location $D$ with $D - C = \vec{V}$. $C + \vec{V}$ is called the *translation* of the location $C$.
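The location/direction algebra above can be sketched in coordinates. This is a hypothetical Python model for illustration; the helper names `diff` and `translate` are made up, not from the original:

```python
# Locations and directions modeled as coordinate tuples: a direction is the
# difference of two locations, and a location translated by a direction
# yields another location.

def diff(B, A):
    """Direction B - A."""
    return tuple(b - a for b, a in zip(B, A))

def translate(C, V):
    """Location C + direction V."""
    return tuple(c + v for c, v in zip(C, V))

A, B, C = (0.0, 0.0), (3.0, 1.0), (5.0, 5.0)
V = diff(B, A)             # the direction taking A to B
print(translate(C, V))     # translating C by the same direction -> (8.0, 6.0)
print(diff(A, A))          # A - A is the zero direction -> (0.0, 0.0)
```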

Affine subsets that contain every location produced by a weighted sum (with unit weight sum) of their elements are called *spans*. The set

$$\Big\{\textstyle\sum_i \alpha_i A_i : \sum_i \alpha_i = 1\Big\}$$

of weighted sums of elements of a set $s$ is the smallest span containing $s$. It is denoted by $\mathrm{span}(s)$. The set $s$ is called *independent* if removal of any of its elements results in a smaller span. If a span is spanned by an independent set, then one less than that set's cardinality is called the dimension of the span and is denoted by $\dim$. Spans of dimension 1 are called lines, spans of dimension 2 are called planes, and spans of dimension one less than that of the entire affine space are called hyperplanes. If $A_0, \dots, A_n$ are independent, the set of their weighted sums with non-negative weights is called the simplex with vertices $A_0, \dots, A_n$. A 1-simplex is called a segment, a 2-simplex a triangle, and a 3-simplex a tetrahedron. The weighted sum $\frac{1}{n+1}\sum_i A_i$ is called the center of the simplex. The special case of a segment's center is called the mid-point. The concepts of independence and span carry over verbatim to the set of *directions*, simply by removing the restriction on the sum of weights. For any span, the set of differences of its elements is called its associated span of directions. Affine spans are called parallel if their associated spans of directions coincide. If $a$ is the entire affine space and $l$ is a fixed line of directions, then the lines for which $l$ is the line of directions are mutually parallel. This set of lines is again an affine space, which we call the quotient of $a$ by $l$ and denote by $a/l$. Note that $\dim(a/l) = \dim(a) - 1$.
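The center of a simplex is a weighted sum whose weights add to 1, so it is a location. A minimal Python sketch (hypothetical helper names, coordinates assumed for illustration):

```python
# A weighted sum of locations with unit weight sum; the simplex center uses
# equal weights 1/(n+1).

def weighted_sum(points, weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "weights of a location must sum to 1"
    dim = len(points[0])
    return tuple(sum(w * p[i] for w, p in zip(weights, points)) for i in range(dim))

def center(points):
    n = len(points)
    return weighted_sum(points, [1.0 / n] * n)

A, B, C = (0.0, 0.0), (6.0, 0.0), (0.0, 6.0)
print(center([A, B]))      # mid-point of segment AB -> (3.0, 0.0)
print(center([A, B, C]))   # center of triangle ABC, approximately (2.0, 2.0)
```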

For any pair of directions $\vec{U}, \vec{V}$, their *scalar product* is a number denoted by $\vec{U} \cdot \vec{V}$. The rules are the same as those for ordinary multiplication of numbers. In particular, $\vec{U} \cdot \vec{U}$ abbreviates to $\vec{U}^2$. The expression $\sqrt{\vec{U}^2}$ is called the norm and is denoted by $|\vec{U}|$. For a fixed independent set, the squared norm of a weighted sum is a homogeneous polynomial of degree 2 in the weights. For a convenient selection of the independent set that polynomial can be written as $\pm\alpha_1^2 \pm \dots \pm \alpha_n^2$. The selection of signs is called the signature of the norm and is denoted by a pair $(p, q)$ counting the pluses and the minuses. We consider the constant function $0$ a norm with signature $(0, 0)$. A line can have 2 norms, a plane 3 norms, a space 4 norms, and a 4d space 5 norms. The signature of a norm restricted to a subspace has one or both of the counters decremented. Norms with signature $(n, 0)$ are called Euclidean and those with signature $(n-1, 1)$ are called Minkowskian. Directions with zero norm are called *isotropic*. A Euclidean norm has no isotropic directions other than $\vec{0}$. For a Minkowskian norm the set of isotropic directions is called the *isotropic cone*. When the norm is Euclidean, the expression $|B - A|$ is called the distance separating $A$ and $B$. If the norm is Minkowskian that expression is called the interval. In a Euclidean space the restriction of the norm to a subspace is also Euclidean. In a Minkowskian space that restriction is Euclidean if the subspace does not contain nonzero isotropic directions and is Minkowskian otherwise.
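As a small illustration, the diagonalized squared norm of signature $(p, q)$ can be evaluated on coordinate weights. This is a Python sketch; the convention of writing the $p$ plus signs first is an assumption made here:

```python
# Squared norm of signature (p, q): p plus signs followed by q minus signs,
# as in the diagonalized form +a1^2 + ... - an^2.

def sq_norm(v, p, q):
    assert len(v) == p + q
    return sum(x * x for x in v[:p]) - sum(x * x for x in v[p:])

# Euclidean plane, signature (2, 0): only the zero direction is isotropic.
print(sq_norm((3.0, 4.0), 2, 0))   # -> 25.0

# Minkowskian plane, signature (1, 1): (1, 1) lies on the isotropic cone.
print(sq_norm((1.0, 1.0), 1, 1))   # -> 0.0
```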

A Field is a set whose elements are called scalars. Two scalars can be combined in two ways: one combination is called their sum and is denoted by $\alpha + \beta$; the other is called their product and is denoted by $\alpha\beta$ (composition symbol implied). Both compositions are associative and commutative, and both have neutral elements, called zero and one and denoted by $0$ and $1$ respectively. Any element $\alpha$ has an additive inverse denoted by $-\alpha$ and, if non-zero, a multiplicative inverse denoted by $\alpha^{-1}$. In short, addition is a commutative group, and multiplication restricted to non-zero elements is also a commutative group. Multiplication is distributive relative to addition. The term *additive group* is a common abbreviation for a commutative group which uses the $+$, $-$, $0$ symbols as above.
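For a concrete illustration that the field rules can hold even in a finite set, here is a sketch of arithmetic modulo a prime. The use of Fermat's little theorem to compute inverses is a standard trick, not from the original text:

```python
# A finite field: integers mod a prime p satisfy all of the field rules.
p = 7
add = lambda a, b: (a + b) % p
mul = lambda a, b: (a * b) % p
neg = lambda a: (-a) % p
inv = lambda a: pow(a, p - 2, p)   # Fermat: a^(p-2) is the multiplicative inverse

assert add(3, neg(3)) == 0                              # additive inverse
assert mul(3, inv(3)) == 1                              # multiplicative inverse
assert mul(2, add(3, 4)) == add(mul(2, 3), mul(2, 4))   # distributivity
print(inv(3))   # -> 5, since 3 * 5 = 15, which is 1 mod 7
```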

Assume, for simplicity, that we live in a Kingdom whose currency is a single-denomination golden coin. Wealth in this case is just the count of coins in one's possession. An example of addition would be the combination of wealth by the marriage of two families or by a merger of two corporations. The combined wealth will be the same if the attribution of wealth between the two parties is reversed; in other words, the composition is commutative. A penniless bride contributes nothing: she has 0 wealth. If a marriage results in 0 wealth, it means that one family's debt was the negative of the other family's wealth. Assume that a bank promises to double your money in a year. We can treat the resulting wealth as a composition of two amounts:

- the amount with which the bank promises to replace every coin in your account (2 in this case)
- the initial wealth.

This is how we define multiplication. Scalars can be thought of as elements of the set of all potential instances of wealth. You can never go broke by acquiring additional funds. This is an experimental fact that cannot be derived by logical reasoning from the *fundamental rules* of a field; it is in fact theoretically possible for a field to be finite. The mathematical theory that correctly describes wealth is that of an infinite field.

Mathematics is a language. Like any other language, it is written by combining characters into words and words into grammatical sentences. In this post we use plain English to describe this grammar and the situations where the language can be useful.

Mathematics makes statements about elements of a *set*. It does not matter what *sets* are exactly; what matters is the presence of one or more *compositions*. A *composition* is the ability to use elements to produce references to other elements. *Compositions* can be unary, binary, ternary, etc., depending on how many elements we start with. A mathematical statement is an assertion that a given element can be referenced by applying *compositions* in different ways. A few of these statements are designated as *fundamental rules*. Others are derived from these rules by logical reasoning.

To formulate a statement we first assign names (labels) to the elements involved. By convention, the names are short, often one letter long, with optional numeric subscripts or accents. The statement is a pair of expressions separated by the = sign. Each expression is a sequence of element names separated by some fixed, non-alphanumeric symbols identifying *compositions*, and delimited by parentheses to clarify the order in which the compositions are applied. Sometimes the symbol and/or the parentheses can be inferred from the context and skipped; for example, we may agree that one composition has precedence over another. The number of compositions used in a particular theory is usually quite small, and the selection of symbols often carries a hint about the *fundamental rules* in effect. Some *fundamental rules* are valid universally. Sometimes we postulate the presence of 'special' elements or subsets, and of certain rules that apply only to them. Sometimes we may derive additional ways to combine elements, stated in terms of the *compositions* specified initially. We may then assign additional symbols to these 'secondary' compositions.

Sometimes we are guided by simplicity and elegance. Here are the most common examples of *fundamental rules* used in binary compositions:

- commutativity: postulates that the order in which elements are written does not matter
- associativity: postulates that the placement of parentheses does not matter
- neutral element: postulates the presence of an element which, when used in the composition, does not produce anything new (an example of a 'special' element)
- inverse element: postulates that when an element is the result of the composition, then knowledge of one of the elements used implies knowledge of the second. If the result of the composition is fixed to be the neutral element, then that second element is called the inverse of the first (an example of a derived unary composition)
- distributivity: postulates that, if we are dealing with 2 compositions, it does not matter whether we first combine 2 elements using one rule and then combine the result with a third element, or whether we first combine the third element with each of the 2 elements using the second rule and then combine the results using the first rule
- group: shorthand for a combination of the following postulates: the composition is associative, has a neutral element, and every element has an inverse.
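As a concrete illustration of these postulates, a small composition can be checked against them by brute force. A Python sketch, using addition modulo 6 as the example:

```python
# Verify the group postulates (plus commutativity) for addition mod 6.
elements = range(6)
op = lambda a, b: (a + b) % 6

assoc = all(op(op(a, b), c) == op(a, op(b, c))
            for a in elements for b in elements for c in elements)
neutral = [e for e in elements if all(op(e, a) == a == op(a, e) for a in elements)]
inverses = all(any(op(a, b) == neutral[0] for b in elements) for a in elements)
commut = all(op(a, b) == op(b, a) for a in elements for b in elements)

print(assoc, neutral, inverses, commut)   # -> True [0] True True
```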

Sometimes we take a cue from real life. We may encounter a category of objects which allow us to manipulate them to produce other objects. Applying these manipulations in different orders may sometimes produce identical outcomes. A statement to that effect is an experimental fact, a law of Nature if you will. But that statement, written more concisely using letters and symbols as described above, can be used to define a mathematical theory. We then refer to the objects as if they were elements of a *set*. There is a difference of emphasis: in an experiment, new objects are often produced; in Mathematics, the result of a composition is a reference to an element already there. It is as if the *set* the theory talks about is the outcome of unlimited applications of the experimental compositions.

In the following we will show a few examples. In each we will start with a formal definition of a theory and then give examples of natural compositions which satisfy the *fundamental rules* of the theory.

Here we talk primarily about JavaScript projects but include Java as well. The major tools at our disposal are Git for source control, and Maven and Npm for dependency/task management of Java and JavaScript respectively. Java is compiled and packaged by the JDK, while in the JavaScript world these tasks are performed by Webpack and Babel.

There are alternatives, but these will not be discussed here due to the author's limitations.

All tools listed above operate on files in a directory tree, and most have their configuration defined by a file placed at the root of the tree. The root-level configuration is often supplemented by a global setup file in the user's home directory or a system environment variable. Any project development except the most trivial consists of multiple inter-related sub-modules, some local and some external, each designed for a specific functionality. In Maven, the dependencies are managed outside the project directory structure and there is no difference between an external and a local dependency. On the other hand, Npm manages the dependencies within the project's structure (./node_modules).

And here comes a major difference between Npm and Maven. In Maven, the physical location of a project sub-module does not matter. The *mvn install* command registers the sub-module in the local cache and allows it to be referenced by any other sub-module, in the very same way we reference any 3rd-party library.

Code

- Redux tutorial
- Webpack intro

A dual-deploy project can be served by a NODE HTTP server as well as by a SERVLET container such as Tomcat. The only NODE-specific part of the project is the landing page (index.html); the SERVLET container will supply its own landing page.

These instructions assume NPM 5.6.0.

[project-root]
    package.json
    pom.xml
    webpack.config.js
    [src]
        index.js
        App.jsx
        App.scss
    [public]
        index.html

package.json contains the project description for NPM.

Created with the interactive command

npm init

This file contains the following groups of information

- Name and other information which helps identify the project
- Run time dependencies
- Dev dependencies
- NPM scripts

The first 2 are project-specific and the last 2 are not.

Dev dependencies may be broken into categories

- webpack support (webpack, webpack-cli, webpack-dev-server)
- babel support (babel-core, babel-loader, babel-preset-env, babel-preset-stage-0, babel-preset-react)
- css/font support (autoprefixer, css-loader, file-loader, mini-css-extract-plugin, node-sass, postcss-loader, resolve-url-loader, sass-loader, style-loader, url-loader)

Dependencies on 3rd-party packages are added with the command

npm install -D [pkg-name]

A source directory of a project can be exposed as an NPM dependency for a sibling project. This can be accomplished by defining a name-only package.json in the directory that is to be exposed and manually adding the dependency in the referencing project's package.json:

"[local-project-name]": "file:[path-to-local-project]"
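For illustration, such an entry might look like this in the referencing project's package.json (the package name and the relative path here are placeholders, not from the original):

```json
{
  "dependencies": {
    "my-shared-lib": "file:../my-shared-lib"
  }
}
```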

If the referenced source needs Babel, then additional steps are required:

- Babel source directories need to be included explicitly (you cannot just exclude node_modules)
- Babel source listings must be physical directories rather than symbolic links from node_modules
- An alias needs to be created for React to prevent webpack from including multiple copies
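A sketch of the corresponding webpack.config.js fragments; the paths and the `my-shared-lib` name are hypothetical, and the exact option layout assumes a webpack 4 era configuration:

```javascript
// Sketch: pin a single copy of React via an alias, resolve physical paths
// instead of symlinks, and include the linked source directory for babel.
const path = require('path');

module.exports = {
  resolve: {
    alias: { react: path.resolve('./node_modules/react') },
    symlinks: false,
  },
  module: {
    rules: [{
      test: /\.jsx?$/,
      loader: 'babel-loader',
      include: [
        path.resolve('./src'),
        path.resolve('../my-shared-lib/src'),  // physical directory, not the symlink
      ],
    }],
  },
};
```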

pom.xml contains the project description for MVN. It is needed to

- Allow MVN to initiate the NPM build
- Combine the output of the NPM build into a servlet resource JAR

Except for the project name, this file contains nothing that is specific to a particular project.

Created with the command

mvn archetype:generate -DgroupId=[my-comp] -DartifactId=[my-proj] -DarchetypeArtifactId=maven-archetype-simple -DinteractiveMode=false

webpack.config.js contains the project description for webpack and babel. It is not project-specific, except for the listing of local sources used by babel.

If $V$ is a vector space then any form $\varphi \in V^*$ extends to a unique anti-derivation $\iota_\varphi$ of the exterior algebra $\Lambda V$, i.e.,

$$\iota_\varphi(a \wedge b) = (\iota_\varphi a) \wedge b + (-1)^{\deg a}\, a \wedge (\iota_\varphi b)$$

for homogeneous $a$.

If $e_1, \dots, e_n$ is a basis of $V$, $e_1^*, \dots, e_n^*$ the dual basis, and if $\iota_i$ denotes $\iota_{e_i^*}$, we have the following action on monomials:

$$\iota_i\,(e_{j_1} \wedge \dots \wedge e_{j_k}) = \pm\, e_{j_1} \wedge \dots \wedge \widehat{e_{j_m}} \wedge \dots \wedge e_{j_k} \ \text{ if } j_m = i, \quad \text{and } 0 \text{ otherwise}$$

(the sign is negative if the number of terms preceding the matching one is odd).
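The monomial action and its sign rule can be prototyped directly. A Python sketch; encoding monomials as tuples of indices is a representation chosen here for illustration:

```python
# Action of the anti-derivation on a basis monomial e_{j1} ^ ... ^ e_{jk},
# represented as a tuple of indices: the matching factor is removed, with
# sign (-1)^(number of preceding factors).

def contract(i, monomial):
    """Return (sign, reduced monomial), or None if e_i does not occur."""
    if i not in monomial:
        return None
    pos = monomial.index(i)
    sign = -1 if pos % 2 else 1
    return sign, monomial[:pos] + monomial[pos + 1:]

print(contract(1, (1, 2, 3)))   # matching factor first -> (1, (2, 3))
print(contract(2, (1, 2, 3)))   # one factor precedes  -> (-1, (1, 3))
print(contract(4, (1, 2, 3)))   # no match             -> None
```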

The Orthogonal Group is the symmetry group of a vector space equipped with a (non-degenerate) scalar product, that is, the group of linear transformations $Q$ such that $Qu \cdot Qv = u \cdot v$. If we present $Q$ as a matrix with respect to some vector basis, then $Q$ is orthogonal when its coefficients satisfy certain algebraic equations which depend on how the basis vectors were selected. For example, if these were orthonormal then the orthogonality condition can be written as $Q^T Q = I$, which simply states that the columns of $Q$ form an orthonormal basis. Applying the log to $Q$ we obtain an infinitesimal variant of this relation, $X^T + X = 0$ (known as the anti-symmetry property). In general, if $G$ is the matrix defined by scalar products between basis vectors (invertible by the non-degeneracy assumption), then the matrix of scalar products between column vectors of $Q$ is the matrix $Q^T G Q$. The orthogonality condition states that this matrix coincides with $G$. This can be written as $Q^T G Q = G$ and the corresponding infinitesimal condition as $X^T G + G X = 0$.
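These conditions can be checked numerically. A minimal pure-Python sketch; the rotation and boost matrices are illustrative choices, not from the original:

```python
# Check Q^T Q = I for a plane rotation, and Q^T G Q = G for a Lorentz boost
# preserving the non-identity Gram matrix G = diag(1, -1).
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

t = math.pi / 6
Q = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]
I = [[1.0, 0.0], [0.0, 1.0]]
QtQ = matmul(transpose(Q), Q)
print(all(abs(QtQ[i][j] - I[i][j]) < 1e-12 for i in range(2) for j in range(2)))

G = [[1.0, 0.0], [0.0, -1.0]]
B = [[math.cosh(0.5), math.sinh(0.5)], [math.sinh(0.5), math.cosh(0.5)]]
QtGQ = matmul(matmul(transpose(B), G), B)
print(all(abs(QtGQ[i][j] - G[i][j]) < 1e-12 for i in range(2) for j in range(2)))
```

Both checks print True: the rotation has orthonormal columns, and the boost preserves the Minkowskian scalar product.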

This is a simple Lie algebra, of rank $n$ and type $D_n$ when the dimension $N = 2n$ is even, and of rank $n$ and type $B_n$ when $N = 2n + 1$ is odd. Recall the following standard representation of the root system in terms of an orthonormal basis $\varepsilon_1, \dots, \varepsilon_n$:

$$B_n: \ \{\pm\varepsilon_i \pm \varepsilon_j \ (i < j),\ \pm\varepsilon_i\}, \qquad D_n: \ \{\pm\varepsilon_i \pm \varepsilon_j \ (i < j)\},$$

where the simple roots are defined by

$$\alpha_i = \varepsilon_i - \varepsilon_{i+1} \ (1 \le i < n), \qquad \alpha_n = \varepsilon_n \ \text{for } B_n \ \text{ or } \ \alpha_n = \varepsilon_{n-1} + \varepsilon_n \ \text{for } D_n.$$

The relation between the orthogonal Lie algebra and its root system is clear if, instead of an orthonormal basis, we use a Witt basis. The resulting root system will be of type $B_n$ in odd dimensions and of type $D_n$ in even dimensions. In the even-dimensional case this basis is present if the vector space decomposes into a direct sum of isotropic subspaces. The Witt basis is then $e_1, \dots, e_n, e_{-n}, \dots, e_{-1}$, where all pairwise scalar products are 0 except $e_i \cdot e_{-i} = 1$. In this case $G$ is the matrix with 1's on the co-diagonal and 0's elsewhere, and $G X^T G$ is the reflection of $X$ with respect to the co-diagonal. We have a Cartan algebra consisting of diagonal matrices satisfying the above symmetry restriction. This algebra has a convenient basis $H_i = E_{ii} - E_{-i,-i}$ (where $E_{jk}$ are matrix units). Its dual basis $\varepsilon_1, \dots, \varepsilon_n$ is then the orthonormal basis used in the standard representation of the root system above. The root-space decomposition is given by

$$\mathfrak{g} = \mathfrak{h} \oplus \bigoplus_{\alpha} \mathfrak{g}_\alpha,$$

where $\mathfrak{g}_\alpha = \{X : [H, X] = \alpha(H)X \text{ for all } H \in \mathfrak{h}\}$.
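As a small numerical check: in a Witt basis the Gram matrix is the co-diagonal identity, and diagonal matrices of the form $\mathrm{diag}(t_1, \dots, t_n, -t_n, \dots, -t_1)$ satisfy the infinitesimal orthogonality condition $X^T G + G X = 0$. A Python sketch with an assumed ordering of the basis vectors:

```python
# G has 1's on the co-diagonal; X is diagonal with entries (t, reversed -t).
n = 2
N = 2 * n
G = [[1.0 if i + j == N - 1 else 0.0 for j in range(N)] for i in range(N)]

t = [0.3, -1.2]
diag = t + [-x for x in reversed(t)]
X = [[diag[i] if i == j else 0.0 for j in range(N)] for i in range(N)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

Xt = [list(r) for r in zip(*X)]
XtG, GX = matmul(Xt, G), matmul(G, X)
S = [[XtG[i][j] + GX[i][j] for j in range(N)] for i in range(N)]
print(all(abs(S[i][j]) < 1e-12 for i in range(N) for j in range(N)))  # -> True
```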

The name 'Orthogonal Group' comes from the traditional emphasis on the metric-preserving representation. From a purely group-theoretic point of view, it is just one of the fundamental representations. In this section we will examine another fundamental representation of the 'Orthogonal Group', not related to orthogonality. The underlying vector space will be the exterior algebra $\Lambda V$. The transformations will be specified by providing a set of vector-space generators, namely composites of two members, where each member is either an anti-derivation related to an element of $V^*$ or an exterior multiplication by an element of $V$. This representation decomposes into a sum of two fundamental representations, $\Lambda^{\mathrm{even}} V$ and $\Lambda^{\mathrm{odd}} V$.

A convenient Cartan-subalgebra basis can again be chosen so that its dual basis is the orthonormal basis used in the standard representation of the root system above. The root-space decomposition then takes the same form as in the previous section.
