

Final Answers
© 2000-2023   Gérard P. Michon, Ph.D.

Matrices and
Determinants



Related Links (Outside this Site)

The Matrix Reference Manual  by  Mike Brookes
The Fundamental Theorem of Linear Algebra (1993-)  by  Gil Strang (1934).
Toeplitz Matrices  by  Dario Bini.
Toeplitz Determinants and Spin Correlations  by  Tai Tsun Wu  (1966).
MathWorld (Eric Weisstein):   Hankel Matrix
MathPages (Kevin Brown):   The Resultant and Bezout's Theorem
A Library of Hadamard Matrices  by  N.J.A. Sloane.
Complex Hadamard Matrices  by  Karol Zyczkowski and Wojciech Tadej.
The Hadamard maximal determinant problem.

Wikipedia:   Determinant   |   List of Matrices   |   Levinson-Durbin algorithm

Videos :

The Rise and Fall of Matrices (42:37)  by  Walter Ledermann  (LMS, 1986).
Gaussian elimination  [ Part 1  |  Part 2 ]   by  Norman J. Wildberger  (2010).
What's a [symmetric] determinant? (2:50)  James Schloss  (LeiosOS, 2016-07-23).
The determinant, essence of linear algebra (10:02) Grant Sanderson (2016-08-10).

 

Linear Algebra:  Matrices and Determinants


(2006-12-26)   Identity (I), Permutation Matrices, Exchange Matrix (J)
The various permutations of the columns of the identity (I).

Those square matrices whose elements vanish except for one unit element per row and per column are known as  permutation matrices.  Each corresponds to a permutation of the columns of the identity matrix (I) and they form collectively a multiplicative group isomorphic to the symmetric group on a set of n elements.

One particular permutation matrix is the so-called exchange matrix (J) which is the "mirror image" of the identity matrix itself.  For example:

I6   =   [  1   0   0   0   0   0  ]          J6   =   [  0   0   0   0   0   1  ]
         [  0   1   0   0   0   0  ]                   [  0   0   0   0   1   0  ]
         [  0   0   1   0   0   0  ]                   [  0   0   0   1   0   0  ]
         [  0   0   0   1   0   0  ]                   [  0   0   1   0   0   0  ]
         [  0   0   0   0   1   0  ]                   [  0   1   0   0   0   0  ]
         [  0   0   0   0   0   1  ]                   [  1   0   0   0   0   0  ]

In spite of this superficial finite-dimensional resemblance with the identity matrix, there's no nice generalization of the exchange matrix to infinitely many dimensions.


(2023-05-31)   Gil Strang's Matrix
The favorite square matrix of professor W. Gilbert Strang (MIT).

S6   =   [  2  -1   0   0   0   0  ]
         [ -1   2  -1   0   0   0  ]
         [  0  -1   2  -1   0   0  ]
         [  0   0  -1   2  -1   0  ]
         [  0   0   0  -1   2  -1  ]
         [  0   0   0   0  -1   2  ]

Of particular interest is Strang's matrix  S11  because it has  121  elements...

Favorite Matrix
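For readers who want to play with it, here's a minimal sketch (Python with numpy is assumed here and in the other snippets below; none of this code comes from the original text) which builds Sn and checks the well-known fact that its determinant is n+1:

    import numpy as np

    def strang_matrix(n):
        """Second-difference matrix: 2 on the diagonal, -1 just above and just below."""
        return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

    S6 = strang_matrix(6)
    print(S6.astype(int))
    print(round(np.linalg.det(S6)),                  # 7   (the determinant of Sn is n+1)
          round(np.linalg.det(strang_matrix(11))))   # 12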


(2007-02-13)   Defining matrix operations from operations on elements
Dirac's notation is convenient to express the elements of a matrix A.

The  scalar  at row  i  and column  j  of matrix A is often denoted  aij

Similar  ad hoc  notations are adequate when juggling with just a few distinct matrices, in spite of the fact that the name of the doubly-subscripted symbol is not directly related to the name of the matrix itself  (A)...

That flaw is avoided with the nice self-contained notation  (due to P.A.M. Dirac)  shown on the right-hand side of the following equation:

aij   =   < i | A | j >

This is sometimes called the  element  of  A  between  i and j.  Abstract meanings are given to separate parts of that notation:  | j >  is called a  ket  (or  column vector)  in the canonical orthonormal  basis  and  < i |  is the dual of such a thing  (it's called a  bra  or  row vector).

Originally,  the words  bra  and  ket  were just the two halves of the word  bracket,  a mnemonic invented by Dirac as a reminder that the row-vector comes before the column-vector in the bracket notation normally used to denote the inner product, which forms a  scalar  (with the opposite order, you'd form a square matrix instead).

These interpretations are important and do repay study, but they're not a prerequisite to use the notation as a whole.  For example, the following equation  defines  the sum  A+B  of two matrices A and B by giving every element of it,  in terms of a simple addition of scalars :

< i | A+B | j >   =   < i | A | j >  +  < i | B | j >

More interestingly,  here's how to  define  the product  A B  of two matrices:

< i | A B | j >   =   Σk  < i | A | k >  < k | B | j >
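As a sketch of how that rule translates into a computation, here's a plain-Python rendering of the element-wise definition (matrices stored as lists of rows; purely illustrative):

    def mat_mul(A, B):
        """Product of two matrices stored as lists of rows:  <i|AB|j> = sum_k <i|A|k><k|B|j>."""
        n, m, p = len(A), len(B), len(B[0])
        assert len(A[0]) == m, "inner dimensions must match"
        return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
                for i in range(n)]

    print(mat_mul([[1, 2], [3, 4]], [[0, 1], [1, 0]]))     # [[2, 1], [4, 3]]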


That general rule for matrix multiplication was made explicit in 1812  (using more traditional index notations)  by the French mathematician Jacques Binet (1786-1856; X1804)  who is also remembered for two utterly unrelated things,  both known as  Binet's formulas :
 
1)   Explicit expression  for the n-th term of a sequence obeying a linear recurrence relation  (e.g.,  the Fibonacci sequence).
2)   In celestial mechanics, Binet's formulas allow an easy proof that the orbit of two gravitating bodies must be a conic section.

If we denote by  z*  the conjugate  a-ib  of the  complex  number  z = a+ib,  (z=z* when z is a real number)  then we may define as follows the  conjugate transpose  A*  of the complex matrix  A:

∀ i ,  ∀ j ,   < i | A* | j >   =   < j | A | i >*

This operation is far more important than transposition without conjugation (which is virtually useless in the complex realm).  With respect to A,  the matrix  A*  is variously called  conjugate transpose, Hermitian conjugate, adjoint, Hermitian adjoint  or  dual.  (It's called transpose for real matrices.)

Here's how to prove for matrices the relation which gives the adjoint of a product:  (A B)*  =  B* A*   (this holds for whatever is called  adjoint  or  dual  among other  multiplicative  objects for which the term is defined).

< i | (A B)* | j >   =   < j | A B | i >*
                     =   Σk  ( < j | A | k >  < k | B | i > )*
                     =   Σk  < j | A | k >*  < k | B | i >*
                     =   Σk  < k | A* | j >  < i | B* | k >
                     =   Σk  < i | B* | k >  < k | A* | j >
                     =   < i | B* A* | j >
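A quick numerical spot-check of that identity (numpy assumed, random complex matrices):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    print(np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T))   # True:  (AB)* = B*A*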


(2019-01-29)   Triangular Matrices

The concepts we introduced in the  previous  section are  natural  in the sense that they do not depend on the structure of the  set  used to index matrix coefficients.  In particular,  any  order  between distinct indices was utterly irrelevant in those definitions.

Incidentally,  this is true because we only consider matrices of finite dimension over a  ring  of coefficients where addition is required to be commutative.  If either condition fails,  the summation symbol  Σ  (subscripted by the dummy name of the summation index)  is no longer well-defined.  Even with a commutative addition,  an infinite sum  could depend  on the ordering of indices.

The identity matrix  I  has a natural definition in this sense while  other permutation matrices  do not  (including the mirror image of  I  featured previously).

 Come back later, we're 
 still working on this one...

Transposition is one obvious  anti-isomorphism  between  upper-triangular matrices  and  lower-triangular matrices.  A straight isomorphism can be defined using any  involution  σ  among the set of indices.  In dimension  n+1,  if we use the integers from  0  to  n  as our set of indices,  σ(i) = n-i  will do.  The induced involution over the set of matrices is defined by:

< i | σ(A) | j >   =   < σ(i) | A | σ(j) >

So defined,  σ  is a ring isomorphism between  upper-triangular matrices  and  lower-triangular matrices.
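A small numerical illustration (numpy assumed): with J the exchange matrix, the map A → J A J reverses both indices, sends upper-triangular matrices to lower-triangular ones and preserves products.

    import numpy as np

    n = 4
    J = np.fliplr(np.eye(n))                       # the exchange matrix
    s = lambda A: J @ A @ J                        # reverses both row and column indices

    A = np.triu(np.arange(1, n*n + 1).reshape(n, n))    # two upper-triangular examples
    B = np.triu(np.ones((n, n)))
    print(np.allclose(s(A), np.tril(s(A))))        # True: the image is lower-triangular
    print(np.allclose(s(A @ B), s(A) @ s(B)))      # True: products are preserved (J² = I)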


(2010-08-10)   Determinant of a Matrix   (Seki, 1683)
It's defined as a  completely antisymmetrical  multilinear form.

In a vector space of dimension  n,  a  completely antisymmetrical  form is a scalar-valued function of n vectors, which is linear with respect to every argument and has the property of being unchanged by any  even  permutation of the arguments and negated by any  odd  one.

Even and odd permutations have signature +1 and -1, respectively.  Permutations are  bijective  functions of a finite set onto itself.  The signature of a function  σ  which is not bijective is zero  ( sgn(σ) = 0 ).

In dimension 3 for example, such a function  f  would be trilinear and verify:

f ( v1 , v2 , v3 )   =   f ( v2 , v3 , v1 )   =   - f ( v2 , v1 , v3 )

A key observation is that all such functions are proportional to each other. 

 Come back later, we're 
 still working on this one...

Leibniz's formula entails a sum of  n!  terms, each a product of n factors:

det ( A )   =   Σσ ∈ Sn   sgn(σ)   ∏i = 1..n   < i | A | σ(i) >
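Here's a direct (and exponentially slow) transcription of that formula, useful only for small n:

    from itertools import permutations
    from math import prod

    def signature(p):
        """Signature of a permutation given as a sequence of the indices 0..n-1."""
        p, s = list(p), 1
        for i in range(len(p)):
            while p[i] != i:                  # sort in place by transpositions;
                j = p[i]                      # each swap flips the sign
                p[i], p[j] = p[j], p[i]
                s = -s
        return s

    def det_leibniz(A):
        """Determinant as Leibniz's signed sum over all n! permutations (tiny n only)."""
        n = len(A)
        return sum(signature(p) * prod(A[i][p[i]] for i in range(n))
                   for p in permutations(range(n)))

    print(det_leibniz([[2, -1, 0], [-1, 2, -1], [0, -1, 2]]))     # 4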


(2012-09-18)   Minors  &  Cofactors
A  minor  is a determinant obtained by deleting n rows and n columns.

The number of deleted rows and columns can be zero  (the whole determinant is a minor of itself).  An n-th minor is obtained by deleting n rows and n columns.  Of particular interest are the  first minors  of a determinant, which are obtained by deleting one row  (the i-th one)  and one column  (the j-th one).

 Come back later, we're 
 still working on this one...

Wikipedia :   Minor   |   Cofactor   |   Laplace expansion


(2010-08-12)   Adjugate  (classic adjoint):  Transpose of the  cofactors.
The adjugate of  A  verifies   adj(A) A  =  A adj(A)  =  det(A) I

The  adjugate  adj(A)  of a matrix  A  may no longer be called its "adjoint",  which is now a synonym of  transpose conjugate  (denoted  A*  or  A† ).

A matrix  A  is invertible iff it has nonzero determinant, in which case:

A-1   =   adj(A) / det(A)

As the determinant of the matrix  A  is an homogeneous polynomial of its elements, it can be written uniquely in the following way, where the polynomials Aij and Bij  do not depend on the element  aij.

det(A)   =   a ij A ij  +  B ij

By definition,  A ij  is the  cofactor  of  a ij
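As a sketch, sympy (assumed here) can check the defining property of the adjugate and the resulting inverse formula on a small example:

    from sympy import Matrix, eye

    A = Matrix([[2, -1, 0], [-1, 2, -1], [0, -1, 2]])
    adjA = A.adjugate()                      # transpose of the matrix of cofactors
    assert adjA * A == A.det() * eye(3)      # adj(A) A  =  det(A) I
    assert A * adjA == A.det() * eye(3)      # A adj(A)  =  det(A) I
    assert A.inv() == adjA / A.det()         # the inverse formula above
    print(A.det(), adjA)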

 Come back later, we're 
 still working on this one...


(2010-08-12)   Eigenvectors and eigenvalues
The image of an eigenvector is proportional to it.

 Come back later, we're 
 still working on this one...

Eigenvalue Problems  (in Sixty Symbols)  by  Seamus Garvey.


(2019-11-23)   Numerical Range
A compact  convex  set associated to  any  complex matrix.

For any  complex  matrix  A  (Hermitian or not)  the  Rayleigh quotient  (or  Rayleigh-Ritz ratio)  at a nonzero point  x  is the following ratio:

( x* A x )  /  ( x* x )

If  A  is a quantum  observable,  that quantity is its  expected value  in the quantum state described by  ket  x.

The  range  of all Rayleigh quotients is called the  numerical range  of  A.  It clearly contains the  spectrum  of  A  (i.e.,  all  eigenvalues  of  A).
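A tiny numerical illustration (numpy assumed): the Rayleigh quotient evaluated at an eigenvector is the corresponding eigenvalue, which shows concretely why the spectrum sits inside the numerical range.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

    def rayleigh(A, x):
        return (x.conj() @ A @ x) / (x.conj() @ x)

    w, V = np.linalg.eig(A)                        # eigenvalues and (column) eigenvectors
    print(all(np.isclose(rayleigh(A, V[:, k]), w[k]) for k in range(4)))   # True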

 Come back later, we're 
 still working on this one...

Rayleigh quotient   |   Numerical range   |   Product numerical range   |   Spectrum   |   Spectral theory
Sturm-Liouville theory (1834)   |   Blaschke products (1915)   |   Wilhelm Blaschke (1885-1962)
Poncelet's closure theorem   |   Jean-Victor Poncelet (1788-1867; X1807)
 
Finding Ellipses  by  Ulrich Daepp, Pamela Gorkin, Andrew Shaffer, Karl Voss  (MAA Press, 2018-10-30)


(2010-08-12)   Characteristic Polynomial  P  of a square matrix
It's independent of the choice of a basis.

 Come back later, we're 
 still working on this one...

Trace :

The  trace  of a matrix is the sum of the elements in its diagonal:

tr ( A )   =   Σi = 1..n   < i | A | i >

It's also, up to sign, the coefficient of the second-highest power of  x  in the characteristic polynomial of the operator.  As such, the  trace  is an intrinsic value associated to the operator itself, irrespective of the choice of a coordinate basis.  So, we may talk freely about the  trace of an operator  (or the trace of a tensor).

The  trace  function is a  linear function.  Therefore,  its  kernel  is a  vector space,  consisting of  traceless matrices,  which play an important rôle in quantum theory:  The  Lie algebras  of the groups  SU(2) and SU(3)  consist of  traceless  matrices,  and so do their respective standard bases,  the  three Pauli matrices  and the  eight Gell-Mann matrices.


(2017-12-03)   Minimal Polynomial  Q  of a Square Matrix
A polynomial  Q  easy to obtain from the characteristic polynomial  P.

Defining  Q  as the least monic polynomial having the same roots as P  (the squarefree part of P),  we obtain it as  P/R  where  R  is the greatest (monic) common divisor of  P  and its derivative  P'.

It's not even necessary to know the roots of  P  to perform that calculation.

The characteristic polynomial  P  is squarefree  (free of repeated roots)  precisely when it's equal to the polynomial  Q  so defined.
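A short sympy sketch (library assumed) of that computation, which never needs the roots themselves:

    from sympy import Matrix, symbols, gcd, quo, diff

    x = symbols('x')
    M = Matrix([[2, 0, 0], [0, 2, 0], [0, 0, 3]])   # eigenvalue 2 is repeated
    P = M.charpoly(x).as_expr()                     # (x-2)**2 * (x-3)
    R = gcd(P, diff(P, x))                          # (x-2)
    Q = quo(P, R, x)                                # same roots as P, each now simple
    print(Q.expand())                               # x**2 - 5*x + 6  =  (x-2)(x-3)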


(2010-08-12)   Cayley-Hamilton Theorem   (1855, 1858, 1878)
Any matrix is a root of its own characteristic polynomial.

That general theorem was first stated in 1858 by Arthur Cayley who only checked it for 3-dimensional  matrices.  A special case  (3-dimensional rotations)  had been stated by Hamilton three years earlier.  The first  general proof  was published in 1878 by  Georg Frobenius (1849-1917).

As the coefficients of the characteristic polynomial of a matrix  A  are polynomial functions of its elements, the assertion states that a particular set of  n2  polynomial functions  (of degree n)  in  n2  variables vanishes.

To prove that the coefficients of all the relevant polynomials vanish, it suffices to establish that the associated polynomial functions vanish on a sufficiently large set of points.  In particular, a  dense  set of points will do  (the zeroes of any continuous function form a closed set).

For an  n-dimensional vector space over complex numbers, the set of matrices with distinct eigenvalues is dense among all square matrices. 
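Meanwhile, the theorem itself is easy to check on any particular matrix; here's a sympy sketch (library assumed) evaluating the characteristic polynomial at the matrix by Horner's rule:

    from sympy import Matrix, eye, zeros, symbols

    x = symbols('x')
    A = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 0]])
    coeffs = A.charpoly(x).all_coeffs()            # coefficients of P, highest degree first
    P_of_A = zeros(3, 3)
    for c in coeffs:                               # Horner's rule, applied to the matrix A
        P_of_A = P_of_A * A + c * eye(3)
    print(P_of_A)                                  # the zero matrix, as Cayley-Hamilton asserts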

 Come back later, we're 
 still working on this one...

Algebraic Number Fields as Matrix Algebras  by  Kapil Hari Paranjape  (2002-10-20).
 
Wikipedia :   Cayley-Hamilton theorem


(2020-09-13)   Diagonalization of  normal  matrices.
A matrix  A  is said to be  normal  when  A A*  =  A* A

Two matrices  A  and  B  are said to be  similar  when they are related by an  inner isomorphism,  namely there's an invertible matrix  P  such that:

B   =   P  A  P-1

A matrix is said to be diagonalizable if it's similar to a diagonal matrix.

 Come back later, we're 
 still working on this one...

Diagonalizable matrix   |   Normal matrix   |   Spectral theorem


(2015-04-03)   Diagonalization and Jordan Normal Form of  M
For some invertible S, the only nonzero off-diagonal elements of  S M S-1  are units just above the diagonal.

 Come back later, we're 
 still working on this one...

Jordan matrix   |   Jordan normal form   |   Camille Jordan (1838-1922; X1855)


(2017-12-03)   Jordan-Chevalley-Dunford  decomposition:  M = D+N
Chevalley's algorithm  finds the  unique  decomposition of a matrix into the sum of its  diagonalizable  and  nilpotent  parts,  which  commute.

History :

Although Camille Jordan published an early form of it in 1870,  that result took a full century to make its way into a graduate textbook  (Dunford & Schwartz, 1971).  It's now common knowledge among graduate students and it has entered the standard undergraduate curriculum for  linear algebra.

What is still lost on most students and some faculty is the  remarkable  fact that the decomposition can be effectively performed without any knowledge of the eigenvalues of the matrix to be decomposed.  The  ground field  need not even be  algebraically closed  (like the complex numbers).  A  perfect field  is enough  (e.g., the rationals or  any  finite field).

That crucial observation is due to  Claude Chevalley (1951),  which fully justifies the name of  Jordan-Chevalley decomposition,  now commonly used.  Here's the chronology of the four key publications:

Since the  Dunford & Schwartz  textbook (1971) was so instrumental in its popularization, the concept was known for a while as  Dunford's decomposition,  a term which is now being deprecated.  (Dunford's earliest research on the topic appeared 3 years after Chevalley's final results.)  As a clear example of reverse-chauvinism, the French have held on to the attribution to Dunford longer than anyone else...    ;)

Proof :

In a context where the matrix could be put in  Jordan's normal form,  the existence and uniqueness of the  Jordan-Chevalley decomposition  are fairly trivial.  However, as advertised, we aim for a stronger result and won't assume that the characteristic polynomial or the minimal polynomial can be fully factored.

We merely assume that the ground field is a  perfect field.  This is all the following algorithm needs to uniquely determine the diagonalizable part D of the matrix M.  The nilpotent part N is then obtained as M-D.  The ground field need not be algebraically closed  (as Chevalley's algorithm doesn't require that).  The last step will be to remark that  N  and  D  commute because they're both obtained as polynomial functions of  M.   QED

Chevalley's Algorithm :

Let  M  be the matrix to decompose.  Let  Q  be its  minimal polynomial.  The diagonalizable part  D  of  M  is a zero of  Q.

In elementary numerical analysis, the  Newton-Raphson method  can be used to find a numerical zero of a polynomial  Q  (or any other differentiable function)  as a limit of a sequence of approximations, defined by  recurrence:

xn+1     =     xn   -   Q ( xn )  /  Q' ( xn )

If the starting point is close enough to the root, the sequence so defined converges  quadratically  to that root  (i.e., the number of correct digits roughly doubles with each iteration).

Chevalley's idea was to apply the same strategy and the same iteration formula to matrices instead of numbers.  The nice surprise is that the convergence is even  much  better, as the limit is  reached  in finitely many steps  (no more than the binary logarithm of the dimension of the matrix).  Let's show that:

 Come back later, we're 
 still working on this one...
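In the meantime, here's a rough sympy sketch of the iteration (the example matrix and the stopping test are illustrative choices, not part of the original argument):

    from sympy import Matrix, Poly, symbols, gcd, quo, diff, eye, zeros

    x = symbols('x')
    M = Matrix([[2, 1, 0], [0, 2, 0], [0, 0, 3]])     # one Jordan block of size 2, plus a 3
    P = M.charpoly(x).as_expr()
    Q = quo(P, gcd(P, diff(P, x)), x)                 # squarefree part of P (the Q of the text)

    def ev(poly, X):
        """Evaluate a polynomial at a square matrix, by Horner's rule."""
        res = zeros(X.rows, X.cols)
        for c in Poly(poly, x).all_coeffs():
            res = res * X + c * eye(X.rows)
        return res

    D = M
    while ev(Q, D) != zeros(M.rows, M.cols):          # Newton step:  D <- D - Q(D) Q'(D)^-1
        D = D - ev(Q, D) * ev(diff(Q, x), D).inv()

    N = M - D                                         # the nilpotent part
    print(D)                                          # diag(2, 2, 3)
    print(N, N**2 == zeros(3, 3), D*N == N*D)         # N nilpotent, commuting with D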

Nilpotent matrix   |   Diagonalizable matrix   |   Jordan-Chevalley (Dunford) decomposition
 
Decomposition of Matrices in Semisimple and Nilpotent Parts  by  Miguel  (MathOverflow, 2012-09-29).
The Unapologetic Mathematician:  The Jordan-Chevalley decomposition & Proof  (2012-08-28).
The Jordan-Chevalley decomposition  by  Joo Heon Yoo  (2014-09-08).
 
Décomposition de Dunford (5:25, in French)  Matthieu Romagny  (Les 5 minutes Lebesgue, 2016-01-19).
 
Décomposition effective de Jordan-Chevalley et ses retombées en enseignement
by  Danielle Couty,  Jean Esterlé  & Rachid Zarouf  (arXiv, 2011-03-25 / 2013-01-15).


(2017-01-04)   Perron Vector of a Matrix with Positive Elements
The  spectral radius  ρ  of such a matrix is a  simple  positive eigenvalue.

The spectral radius of a square matrix is the largest modulus of its eigenvalues.  A simple  eigenvalue is a  simple  root of the characteristic polynomial.  It's associated with an eigenspace of dimension 1.

 Come back later, we're 
 still working on this one...

Cardinal voting   |   Perron-Frobenius theorem (Perron 1907,  Frobenius 1912)
Oskar Perron (1880-1975)   |   Georg Frobenius (1849-1917)


(2006-01-18)   Vandermonde Matrices
A Vandermonde matrix is also sometimes called an alternant matrix.

The Vandermonde matrix associated with a sequence (xn ) is the square matrix of the successive powers of the elements in that sequence.  The element on row  i  and column  j  is  xj  raised to the power  i.  [Some authors consider the transpose of this.]

Mn+1   =   [  1     1     1     1    ...   1   ]
           [  x0    x1    x2    x3   ...   xn  ]
           [  x02   x12   x22   x32  ...   xn2 ]
           [  x03   x13   x23   x33  ...   xn3 ]
           [  ...   ...   ...   ...  ...   ... ]
           [  x0n   x1n   x2n   x3n  ...   xnn ]

The determinant of the above Vandermonde matrix has a nice expression:

det ( Mn+1 )   =   ∏i = 0..n-1  ∏j = i+1..n   ( xj - xi )

Proof :   Both sides of the equation are polynomials of degree  n(n+1)/2  in n+1 variables, which vanish when two of the variables are equal.  Since such polynomials are necessarily proportional,  they must be equal if  some  pair of matching coefficients are equal.  That's easily proven for the  last  pair, by induction on  n.  (HINT:  The coefficient of the last variable raised to the n-th power is the Vandermonde determinant for the n previous variables.)  QED
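A quick numerical check of that product formula (numpy assumed):

    import numpy as np
    from itertools import combinations
    from math import prod

    xs = [2.0, 3.0, 5.0, 7.0]
    M = np.vander(xs, increasing=True).T           # row i holds the i-th powers of the x's
    lhs = np.linalg.det(M)
    rhs = prod(xs[j] - xs[i] for i, j in combinations(range(len(xs)), 2))
    print(np.isclose(lhs, rhs))                    # True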

Alexandre-Théophile Vandermonde  (1735-1796)

The above Vandermonde determinant and Vandermonde matrix are named after the French mathematician Alexandre-Théophile Vandermonde (1735-1796) who is the rightful founder of the modern theory of determinants...

 
Earlier authors had stumbled upon determinants when solving several simultaneous linear equations in as many unknowns.  Those include Cardano (in the 2 by 2 case only),  Leibniz, Cramer (1750) and Bézout (1764).
 
However, Vandermonde published the first investigation (1771) of determinants in their own right, as functions of the coefficients of a matrix.  The importance of the idea was immediately recognized by Laplace (1772) and Lagrange (1773).  Curiously, the determinant concept was made clear before the concept of a matrix itself.  The modern name "determinant" was coined by Gauss in 1801.  When it does not vanish, this quantity "determines" that a system of equations has a unique solution  (in which case it's called a "Cramer system", in high-school parlance).

Strangely enough, Vandermonde himself  never  mentioned the above type of matrices, now named after him.  For lack of a better explanation, it has been speculated  (by Lebesgue and others)  that whoever attributed this enduring "credit" mistook one of Vandermonde's superscripts for an exponent !

What's perhaps the greatest contribution of Vandermonde is often overlooked:  Vandermonde came up with the idea of studying functions which are invariant under any permutation of a given polynomial's roots.  He did so in the first of only four mathematical papers he ever published:  Mémoire sur la résolution des équations (1770).  Arguably, this publication marks the very beginning of modern algebra.  This idea of Vandermonde's would ultimately lead to the celebrated Abel-Ruffini theorem (which states the impossibility of a general solution by radicals for algebraic equations of degree 5 or more).  The subsequent work of Lagrange, Ruffini, Abel and Galois begat Group Theory and Field Theory.


Xinglongdada (2012-05-04, e-mail)   A Totally Positive Matrix
If     0 < a1 < a2 < ... < an     and     0 ≤ λ1 < λ2 < ... < λn
prove that the determinant of the following matrix  An  is positive.

An   =   [  a1λ1   a2λ1   a3λ1   ...   anλ1 ]
         [  a1λ2   a2λ2   a3λ2   ...   anλ2 ]
         [  ...                             ]
         [  a1λn   a2λn   a3λn   ...   anλn ]

Such a matrix is known as a  generalized Vandermonde matrix.

It's a classical example of a  totally positive matrix  (i.e., a real matrix whose  minors  are all positive)  which is to say that we always obtain a square matrix of positive determinant by deleting from it an equal number  (possibly zero)  of rows and columns.  (Note that, if a matrix is a generalized Vandermonde matrix, so are all its submatrices.)

The fact that all generalized Vandermonde determinants are positive happens to be a consequence of the fact that they are all nonzero...

Indeed, there's a continuous path  (depending on a parameter  u  that goes from 0 to 1)  leading from the above determinant to an ordinary Vandermonde determinant  (which is clearly positive, by virtue of the above formula expressing it as a product of positive factors).  The determinant at point  u  along that path is obtained by replacing the exponent  λi  with the positive quantity:

λ'i   =   (1-u) λi  +  u (i-1)

For any u in [0,1], the above inequalities between exponents are preserved.  A continuous function of u over [0,1] cannot go from a negative to a positive value without vanishing at some point.  So, to show that all generalized Vandermonde determinants are positive, we only need to establish that none of them is zero.  Let's prove that:

Proof :

We proceed by induction on the size  n  of the matrix  (n = 1  being trivial).

Let's assume  (by  induction hypothesis)  that there are no vanishing generalized Vandermonde determinants of size less than  n.  Then, if we had a vanishing generalized Vandermonde determinant of size n, there would be a linear combination of different powers of x vanishing for at least  n  positive values of x  (namely, a1 ...an).  That is to say that we'd have coefficients ci  (not all vanishing)  for which:

0   =   c1 xλ1 + c2 xλ2 + ... + cn xλn     for at least n positive values of x.
So,   0   =   c1 + c2 xλ2-λ1 + ... + cn xλn-λ1     for those same values.

By Rolle's theorem, the derivative of the right-hand side would vanish at least once in each of the n-1 intervals separating the n known distinct roots.  So would that derivative multiplied into the positive quantity  x.  Therefore:

0   =   c2 (λ2-λ1) xλ2-λ1 + ... + cn (λn-λ1) xλn-λ1   for at least n-1 values of x>0

This would contradict the  induction hypothesis.   QED

Wikipedia :   Totally Positive Matrix (Fekete's criterion)
Generalized Vandermonde Determinants   by  Hans Peter Schlickewei  &  Carlo Viola.
An Explicit Factorization of Totally Positive Generalized Vandermonde Matrices Avoiding Schur Functions
by Young-Ming Chen, Hsuan-Chu Li, Eng-Tjioe Tan. Applied Mathematics E-Notes, 8 (2008), pp. 138-147.


(2020-09-13)  Signature of a Hermitian matrix   (1852)
This was originally called  Sylvester's law of inertia.


Sylvester's law of inertia (1852)   |   James Joseph Sylvester (1814-1897)
 
Computing the signature of an n×n symmetric matrix  by  Igor Rivin  (MathOverflow, 2012-09-19).
 
Theorem of the Day #142,  by  Robin Whitty.


(2012-10-07)  Cholesky reduction of a Hermitian matrix:
There's a lower-triangular matrix  L  such that   L L* = A

By definition, a  lower-triangular matrix  is a matrix with only zero elements above the diagonal.

This decomposition was introduced by the French engineer  André-Louis Cholesky  in the special case where the matrix  A  is a positive-definite symmetrical real matrix.  In that case, the matrix  L  has real coefficients and nonzero diagonal elements.

The matrix  L  isn't unique:  -L  is satisfactory too and, more generally, so is  L U  for any  diagonal  matrix  U  whose diagonal elements have unit modulus.

A related construction which applies to any  definite Hermitian matrix  (not necessarily positive)  involves a diagonal real matrix  D  and a lower-triangular matrix  L  whose diagonal elements are all equal to  1  (unity).

L D L*   =   A

Conversely, any matrix of this form is clearly Hermitian  (A* = A).

In the special case where  A  is real,  L  and  D  are real too.

The requirement of definiteness  (no zero eigenvalue)  can be dropped.  It's only useful to make the actual construction of  L  more straightforward.

Applications :

Because the decomposition can be performed quite efficiently, it can be used to solve a symmetric system of linear equations in real numbers much faster than with general-purpose Gaussian elimination  (such symmetrical systems arise in the linear least square method, in particular):

L L* x   =   b

The triangular systems  L y  =  b   and   L* x  =  y  are sequentially solved.
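Here's a minimal numerical sketch of that procedure (numpy and scipy assumed):

    import numpy as np
    from scipy.linalg import cholesky, solve_triangular

    A = np.array([[4., 2., 2.], [2., 5., 3.], [2., 3., 6.]])   # symmetric positive-definite
    b = np.array([1., 2., 3.])

    L = cholesky(A, lower=True)                  # A = L L*
    y = solve_triangular(L, b, lower=True)       # forward substitution:   L y  = b
    x = solve_triangular(L.T, y, lower=False)    # back substitution:      L* x = y
    print(np.allclose(A @ x, b))                 # True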

Cholesky's method was published posthumously in 1924.  It received little or no attention until World War II,  when Jack Todd taught it in his analysis course at King's College, London.  In 1948,  Alan Turing  proved the numerical stability of the procedure  (after empirical results published the same year by J.D. Fox, H.D. Huskey & J.H. Wilkinson).

Cholesky decomposition   |   HP Prime implementation   |   André-Louis Cholesky (1875-1918; X1895)
How accurate is Gaussian elimination?  by  Nicholas J. Higham  (July 1989)


(2006-01-18)   Toeplitz Matrices
A Toeplitz matrix is a matrix whose diagonals are constant.

In other words, the value of the element on the ith row and jth column of a  Toeplitz matrix  depends only on the difference  (j-i).

Mn   =   [  x0     x1     x2     x3    ...   xn-1 ]
         [  x-1    x0     x1     x2    ...   xn-2 ]
         [  x-2    x-1    x0     x1    ...   xn-3 ]
         [  x-3    x-2    x-1    x0    ...   xn-4 ]
         [  ...                                   ]
         [  x1-n   x2-n   x3-n   x4-n  ...   x0   ]

Such matrices have been named after  Otto Toeplitz  (1881-1940).

Wikipedia:   Toeplitz matrix   |   List of matrices
On Calculating the Determinants of Toeplitz Matrices  by  Hsuan-Chu Li   (2011)


(2006-12-25)   Circulant Matrices & Circulant Determinants
A  circulant matrix  is a nice special case of a Toeplitz matrix.

The value of the element on the ith row and jth column of an  n  by  n    circulant matrix  depends only on the difference  (j-i)  modulo n.

Mn   =   [  x0     x1     x2     x3    ...   xn-1 ]
         [  xn-1   x0     x1     x2    ...   xn-2 ]
         [  xn-2   xn-1   x0     x1    ...   xn-3 ]
         [  xn-3   xn-2   xn-1   x0    ...   xn-4 ]
         [  ...                                   ]
         [  x1     x2     x3     x4    ...   x0   ]

Each row is thus equal to the previous row shifted one place to the right.

The determinant and eigenvalues  ( λk )  of the above circulant matrix are:

det ( Mn )   =   ∏k = 0..n-1   λk          where          λk   =   Σj = 0..n-1   ω jk  xj

In this,  ω  =  exp ( 2iπ/n )  is a  primitive  nth  root of unity.  For example:

| a  b |
| b  a |     =   a2 - b2   =   ( a + b ) ( a - b )

| a  b  c |
| c  a  b |  =   a3 + b3 + c3 - 3 abc
| b  c  a |  =   ( a + b + c ) ( a + ω b + ω2 c ) ( a + ω2 b + ω c )
                 with   ω  =  ω3  =  ½ (-1 + i √3 )

| a  b  c  d |
| d  a  b  c |   =   a4 - b4 + c4 - d4 - 2 a2c2 + 2 b2d2 - 4 a2bd + 4 ab2c - 4 bc2d + 4 acd2
| c  d  a  b |   =   (a+b+c+d) (a+ib-c-id) (a-b+c-d) (a-ib-c+id)
| b  c  d  a |

The determinant of a circulant matrix is commonly called the  circulant determinant  or, for short, the  circulant  of the  n  coefficients involved.

The  order  of those coefficients matters, except when  n = 3.
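A small numerical check of the eigenvalue formula (numpy assumed); the helper circulant below merely builds the matrix described above:

    import numpy as np

    def circulant(c):
        """Circulant matrix whose first row is c (each row is the previous one, shifted right)."""
        n = len(c)
        return np.array([[c[(j - i) % n] for j in range(n)] for i in range(n)])

    c = [1.0, 2.0, 3.0, 4.0]
    n = len(c)
    w = np.exp(2j * np.pi / n)                                      # primitive n-th root of unity
    eigs = [sum(c[j] * w**(j*k) for j in range(n)) for k in range(n)]
    print(np.isclose(np.prod(eigs), np.linalg.det(circulant(c))))   # True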

Circulant matrix   |   Circulant graph


(2021-07-09)   Periodic sequence defined by a linear recurrence.
What linear recurrences of order k-1 yield sequences of period k?

Let's consider a sequence  u  obeying a linear recurrence of order  k-1 :

un+k-1   =   a1 un+k-2 + a2 un+k-3 + ... + ak-2 un+1 + ak-1 un

Well,  if  u  is periodic of period  k 

 Come back later, we're 
 still working on this one...

There are only four recurrences of order 3 yielding sequences of period 4:

un+3   =   -un+2 - un+1 - un
un+3   =    un+2 - un+1 + un
un+3   =    i un+2 + un+1 - i un
un+3   =   -i un+2 + un+1 + i un

Dirichlet characters   |   Sárközy's condition (1978)   |   András Sárközy (1941-)


(2006-12-27)   Wendt's Determinant   Wn   (1894)
Circulant of  n  binomial coefficients.  (A048954)

In his 1894 dissertation about  Fermat's Last Theorem,  Ernst Adolf Wendt (1872-1946)  investigated the resultant  Wn  of  Xn - 1  and  (X+1)n - 1.

Emma Lehmer proved that  Wn  vanishes if and only if  n  is a multiple of 6.

The  integer  Wn  boils down to the circulant of  n  binomial coefficients  (more precisely, a line of Pascal's triangle, without the rightmost "1").

The element at line i and column j is best described as equal to  C(n,|i-j|).  With the above notations, xj = C(n,j)  [where  j  goes from  0  to  n-1 ].  So:

Wn   =   det ( Mn )   =   ∏k = 0..n-1   [ ( 1 + ω k ) n  -  1 ]          where   ω  =  exp ( 2iπ/n )

n     Factorization of  Wn
1 1
2 -1 . 3
3 22 . 7
4 -1 . 3 . 53
5 112 . 31
6 0
7 26 . 292 . 127
8 -1 . 37 . 53 . 173
9 22 . 7 . 194 . 372 . 73
10 -1 . 3 . 119 . 313
11 235 . 672 . 89 . 1992
12 0
13 36 . 532 . 792 . 1312 . 5212 . 8191
14 -1 . 224 . 3 . 296 . 433 . 1273
15 214 . 7 . 112 . 317 . 614 . 151 . 2712
16 -1 . 37 . 53 . 76 . 1715 . 2573
17 1032 . 1374 . 3072 . 4092 . 6132 . 35712 . 131071
18 0
19 1912 . 2294 . 4192 . 6472 . 7612 . 14832 . 93492 . 524287
20 -1 . 3 . 524 . 119 . 313 . 419 . 616
21 28 . 710 . 292 . 4310 . 127 . 2112 . 337 . 3792 . 4632 . 5472 . 22692
22 -1 . 3 . 2321 . 616 . 893 . 4310 . 1996 . 6833
23 4711 . 1394 . 4614 . 5994 . 11512 . 23472 . 33132 . 178481
24 0
See more details and a much larger table  elsewhere on this site.

If  p  divides  q ,  then  Wp  divides  Wq .  The nth Mersenne number  (2n-1)  divides  Wn  and the quotient  |Wn| / (2n-1)  is a perfect square...
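For the curious, a brute-force sketch (numpy assumed) reproducing the first few values of the above table from the circulant description:

    import numpy as np
    from math import comb

    def wendt(n):
        """Wendt's determinant:  circulant of the binomial coefficients C(n,0) ... C(n,n-1)."""
        c = [comb(n, j) for j in range(n)]
        M = np.array([[c[(j - i) % n] for j in range(n)] for i in range(n)], dtype=float)
        return round(np.linalg.det(M))

    print([wendt(n) for n in range(1, 8)])     # [1, -3, 28, -375, 3751, 0, 6835648]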

On Wendt's Determinant and Sophie Germain's Theorem  by David Ford and Vijay Jha   (1993)
On Wendt's Determinant  by Charles Helou  (1997)
Upper Bounds for the Prime Divisors of Wendt's Determinant  by Anastasios Simalarides  (2000)


(2006-01-18)   Hankel Matrices & Hankel Transform
A Hankel matrix is a matrix whose skew-diagonals are constant.
(Such a matrix is also known as persymmetric or orthosymmetric.)

In other words, the value of the element on the ith row and jth column of a  persymmetric matrix  depends only on the sum of the indices  (i+j).

Mn   =   [  x0     x1     x2     x3    ...   xn-1  ]
         [  x1     x2     x3     x4    ...   xn    ]
         [  x2     x3     x4     x5    ...   xn+1  ]
         [  x3     x4     x5     x6    ...   xn+2  ]
         [  ...                                    ]
         [  xn-1   xn     xn+1   xn+2  ...   x2n-2 ]

With those notations, the  Hankel transform  of the sequence  xn  is defined to be the sequence  yn = det ( Mn+1 )  which starts with  y0 = x0.

Unfortunately, the above is not the only thing called  Hankel transform...


(2006-01-18)   Catberg Matrix 
The Hankel matrix of the reciprocals of Catalan numbers...
Its element on the ith row and jth column is  (i+j+1) / C(2i+2j , i+j).

 Come back later, we're 
 still working on this one...


(2007-02-16)   Sign Matrices, Phase Matrices
The elements of a phase matrix are complex numbers of modulus 1.

Thus, the elements of a phase matrix are complex numbers of the form:

e iθ   =   cos θ  +  i sin θ         [ where θ is real ]

If such a matrix is real, it's called a  sign matrix  (its elements are +1 or -1).


(2007-01-24)   Hadamard Matrices   ( real or complex coefficients )
An Hadamard matrix consists of  orthogonal  rows of  unit  elements.

Introduction  (real coefficients) :

In the realm of  real numbers,  Hadamard matrices are square matrices whose elements are either -1 or +1 and whose rows are pairwise orthogonal.

They're named after the Frenchman Jacques Hadamard (1865-1963) the very influential long-lived analyst who gave functional analysis its name (1910) and will always be remembered for his 1896 proof of the Prime Number Theorem, simultaneous with the independent proof from  Charles de la Vallée-Poussin  (1866-1962).
 
In 1893, Hadamard did write an important paper about those matrices, but they were actually first investigated in 1867, under the quaint name of  anallagmatic pavements,  by James Joseph Sylvester (1814-1897).

Such matrices are often represented graphically as square mosaics of square tiles that are either light or dark...  Here are special Hadamard matrices, known as  Walsh matrices,  whose orders are powers of 2  (1, 2, 4, 8, 16).

[ Walsh matrices of orders 1, 2, 4, 8 and 16, shown as light/dark mosaics ]

In the so-called  normalized  cases, one row and one column have a uniform color which is traditionally light  (so that there are  ½ n(n+1)  light squares and  ½ n(n-1)  dark ones).  The exact color-code can't be fixed  (as the opposite of an Hadamard matrix is also Hadamard)  but we usually assume that the dominant (light) color stands for the value +1.  When its first column and its first row have a uniform value of +1, the matrix is said to be  dephased  (a jargon term discussed below, originally applied to  complex  Hadamard matrices c. 1970).

The above serves to illustrate a remark of Sylvester (1867) that an Hadamard matrix of order  2n  may be obtained from a known Hadamard matrix  H  of order n, simply by juxtaposing four copies of H and negating  one  of those:

[  H    H ]
[  H   -H ]

More generally, an Hadamard matrix of order  mn  may be constructed from an Hadamard matrix K of order m and an Hadamard matrix H of order n by substituting in K all occurrences of +1 by H and all occurrences of -1 by -H  (the above is the special case  m = 2).  In other words, the Kronecker product of two Hadamard matrices is also an Hadamard matrix...
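A compact numerical sketch of Sylvester's construction (numpy assumed), ending with a check of the defining relation  M M* = n In  discussed below:

    import numpy as np

    H = np.array([[1]])
    for _ in range(4):                                   # orders 2, 4, 8, 16
        H = np.kron(np.array([[1, 1], [1, -1]]), H)      # Sylvester's doubling step

    n = H.shape[0]
    print(np.array_equal(H @ H.T, n * np.eye(n)))        # True:  M M* = n In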

Abstract Algebraic Definition   (real or complex realm) :

Hadamard matrices of order n may be defined as square matrices of order n whose elements have unit length  ( modulus )  and which verify the relation:

M M*   =   n In

In this,  M*  is the transpose of a real matrix  M  or, more generally, the transpose conjugate of a complex one.  Matrices with complex elements of unit moduli which satisfy the above relation are called complex Hadamard matrices.  They were introduced in 1970 by  Richard J. Turyn  (1930-?)  as interesting generalizations of the more traditional  (real)  Hadamard matrices,  which can be considered to be merely of a restricted subtype.

A (complex) Hadamard matrix is trivially obtained from another by using one or several of the following types of transformations:

  • Multiply a row or a column by an element of unit modulus (like -1).
  • Replace a row or a column by its (element-wise) conjugate.
  • Permute rows or permute columns.

Matrices so obtained from each other are said to be  Hadamard-equivalent.  A complex Hadamard matrix where all first-row and first-column elements are equal to 1 is said to be  dephased.  Any complex Hadamard matrix is equivalent to a dephased one  (HINT:  Use the first type of transformations.)

It's not difficult to prove that all  complex Hadamard matrices  of order 2 are Hadamard-equivalent to the aforementioned Walsh matrix of order 2.

Walsh matrix of order 2     =     [  1    1 ]
                                  [  1   -1 ]

Similarly, there's essentially only one complex Hadamard matrix of order 3.  It involves a primitive cube root of unity,  denoted ω :

[  1    1    1  ]
[  1    ω    ω2 ]          where     ω  =  ω3  =  ½ (-1 + i √3 )
[  1    ω2   ω  ]

That's an example of a so-called  Butson-Hadamard matrix, namely a complex Hadamard matrix of order  n  whose elements are  p-th roots of unity.  Those were first discussed in 1961 by Alton T. Butson (1926-1997)  before  Turyn's general approach to complex Hadamard matrices (1970).

For any order n, one example of a Butson-Hadamard matrix is given by the Vandermonde matrix of all the n-th roots of unity  (which is Hadamard-equivalent to their circulant matrix).  Another example is provided, for any  composite  order, by extending to complex Hadamard matrices what we've already remarked about real ones, namely that the Kronecker product of two (complex) Hadamard matrices is a (complex) Hadamard matrix.  In particular, if H is a complex Hadamard matrix of order m, a complex Hadamard matrix of order 3m is obtained from the following pattern:

[  H      H      H   ]
[  H     ω H    ω2 H ]          where     ω  =  ω3  =  ½ (-1 + i √3 )
[  H     ω2 H   ω H  ]

This can be applied repeatedly to obtain the following ternary equivalent of the aforementioned (binary) Walsh matrices, using a three-color code:

[ Complex Hadamard matrices of orders 1, 3, 9 and 27, shown with a three-color code ]

For any  odd prime  p,  a complex Hadamard matrix whose elements are  pth  roots of unity must have an order which is a multiple of p.  Conversely, it is conjectured that such matrices exist for any order which is a multiple of p.

The existence of ordinary (real) Hadamard matrices  (as discussed next)  can be construed to be the case  p=2,  for which a slightly different rule holds...

Existence of (Real) Hadamard Matrices of Order n :

Back to  ordinary  Hadamard matrices (whose elements are either -1 or +1).

The  empty matrix  (of order 0)  is  vacuously  an Hadamard matrix.  The order of any Hadamard matrix must be 1, 2, or a multiple of 4  (Hadamard, 1893).  Conversely, the Hadamard-Paley conjecture (better known as the Hadamard Conjecture ) states that there are Hadamard matrices of all such orders...

In 1933, Raymond Paley used finite fields to construct Hadamard matrices of all orders of the form q+1  (resp. 2q+2)  where q is the power of a prime congruent to 3 (resp. 1) modulo 4.  Paley's Theorem  states that Hadamard matrices can be constructed (using Paley's construction and the aforementioned construction of Sylvester) for all positive orders divisible by  4  except  those in the following sequence  (i.e., multiples of  4  not equal to a power of  2  multiplied by q+1, for some power q of an odd prime):

92, 116, 156, 172, 184, 188, 232, 236, 260, 268, 292, 324, 356, 372, 376, 404, 412, 428, 436, 452, 472, 476, 508, 520, 532, 536, 584, 596, 604, 612, 652, 668, 712, 716, 732, 756, 764, 772, 808, 836, 852, 856, 872, 876, 892, 904, 932, 940, 944, 952, 956, 964, 980, 988, 996, 1004, 1012, 1016, 1028, 1036, 1068, 1072, 1076, 1100, 1108, 1132, 1148, 1168, 1180, 1192, 1196, 1208, 1212, 1220, 1244, 1268, 1276, 1300, 1316, 1336, 1340, 1364, 1372, 1380, 1388, 1396, 1412, 1432, 1436, 1444, 1464, 1476, 1492, 1508, 1528, 1556, 1564, 1588, 1604, 1612, 1616, 1636, 1652, 1672, 1676, 1692, 1704, 1712, 1732, 1740, 1744, 1752, 1772, 1780, 1796, 1804, 1808, 1820, 1828, 1836, 1844, 1852, 1864, 1888, 1892, 1900, 1904, 1912, 1916, 1928, 1940, 1948, 1960, 1964, 1972, 1976, 1992, 2008, 2024, 2032, 2036, 2052, 2056, 2060, 2072, 2076, 2092, 2108, 2116, 2136, 2148, 2152, 2156, 2164, 2172, 2200, 2212, 2216, 2228, 2264, 2276, 2284, 2292, 2296, 2300 ... (A046116)

For many of those remaining orders, Hadamard matrices have been discovered by other methods.  Here's a brief summary...

In 1962, Baumert, Golomb, and Hall found an Hadamard matrix of order 92, using the general approach introduced in 1944, by John Williamson (who so constructed an Hadamard matrix of order 172).  A Williamson matrix of order 4m is an Hadamard matrix obtained as follows from 4 matrices of order m  (with unit elements)  A, B, C and D, provided 4 matching relations are satisfied:

[   A     B     C     D  ]          AA* + BB* + CC* + DD*   =   4 m  Im
[  -B     A     D    -C  ]          AB* - BA*   =   CD* - DC*
[  -C    -D     A     B  ]          AC* - CA*   =   DB* - BD*
[  -D     C    -B     A  ]          AD* - DA*   =   BC* - CB*

For example, we may build an Hadamard matrix of order 12 with 4 matrices of order 3, by letting one of them be the matrix U whose 9 elements are equal to +1  (the square of U is 3U)  while the 3 others are equal to  U-2I.

With  A = U,  the resulting Hadamard matrix of order 12  (which isn't normalized in the above sense)  is shown graphically below, next to the Hadamard matrices of order 24 and 48 obtained from it with the aforementioned Sylvester construct:

[ Hadamard matrices of orders 12, 24 and 48 ]

It turns out that all (real) Hadamard matrices of order 12 are equivalent.  In particular, the above order-12 Hadamard matrix is Hadamard-equivalent to the following, which is normalized  (dephased)  and symmetrical:

 Symmetrical Hadamard matrix of order 12

In 1985, K. Sawade found an Hadamard matrix of order 268.

In June 2004, Hadi Kharaghani and Behruz Tayfeh-Rezaie built the Hadamard matrix of order 428  shown below  (1 pixel per matrix element).

 Hadamard matrix of order 428

If we denote by  M′  the mirror image of a matrix  M  with respect to its antidiagonal,  the above matrix is based on 4 matrices  A, B, C, D  of order  m = 107  put together in the following pattern, which gives an Hadamard matrix of order  4m  if and only if all the 10 conditions listed are met.  (Note that  (M N)′  =  N′ M′.)

[   A     B     C     D  ]          AA* + BB* + CC* + DD*   =   4 m  Im
[  -B     A    -D     C  ]          B B*  =  B B*
[  -C     D     A    -B  ]          C C*  =  C C*
[  -D    -C     B     A  ]          D D*  =  D D*

A B* - B A*   =   D C* - C D*
A C* - C A*   =   B D* - D B*
A D* - D A*   =   B C* - C B*
A B* - B A*   =   D C* - C D*
A C* - C A*   =   B D* - D B*
A D* - D A*   =   C B* - B C*

Since June 2004,  the order  668  has been the smallest multiple of 4 for which no Hadamard matrix is yet known  (as of July 2014).

Generalized Hadamard Matrices  by  A. T. Butson  (1961).
Recent advances in the construction of Hadamard matrices  by  Jennifer Seberry   (1973).
Walsh-Hadamard transform   |   Hadamard Matrix :   Wikipedia  |  MathWorld

Usage note :  Both spellings "a Hadamard matrix" and "an Hadamard matrix" can be construed as correct:  The former one applies to an anglicized pronunciation of "Hadamard" ("h" and final "d"  both  pronounced) whereas the latter must be used with the original French pronunciation  (initial "h" and final "d"  both  silent).


(2006-12-31)   Sylvester matrix of two polynomials
The resultant of two polynomials is the determinant of that matrix.

Consider two polynomials A and B of respective degrees m and n.

A   =   Σi = 0..m   ai x i          B   =   Σj = 0..n   bj x j

Their  Sylvester matrix  is a square matrix (of dimension m+n) whose first n rows feature the m+1 coefficients of A and whose last m rows feature the n+1 coefficients of B.  It is defined as follows:

Mm+n   =   [  am   am-1   am-2   am-3   ...    0   ]
           [  0    am     am-1   am-2   ...    0   ]
           [  0    0      am     am-1   ...    0   ]
           [  ...                                  ]
           [  0    0      0      ...    a0     0   ]
           [  0    0      0      ...    a1     a0  ]
           [  bn   bn-1   bn-2   bn-3   ...    0   ]
           [  ...                                  ]
           [  0    0      0      ...    b1     b0  ]

The determinant of that matrix equals the so-called  resultant  of  A  and  B  which may be expressed as follows, in terms of all the complex roots  (αi , βj )  of those polynomials and their leading coefficients  am and bn :

ρ (A,B)   =   det ( Mm+n )   =   (am ) n  (bn ) m   ∏i = 1..m  ∏j = 1..n   ( αi - βj )
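A sympy sketch (library assumed) building the Sylvester matrix of two small polynomials and comparing its determinant with sympy's own resultant:

    from sympy import symbols, Matrix, Poly, resultant

    x = symbols('x')
    A = Poly(x**2 - 3*x + 2, x)            # roots 1 and 2
    B = Poly(x**3 - x, x)                  # roots -1, 0 and 1:  one common root with A

    def sylvester(A, B):
        """Sylvester matrix of two polynomials, built from their coefficient lists."""
        m, n = A.degree(), B.degree()
        a, b = A.all_coeffs(), B.all_coeffs()
        rows = [[0]*i + a + [0]*(n - 1 - i) for i in range(n)] \
             + [[0]*i + b + [0]*(m - 1 - i) for i in range(m)]
        return Matrix(rows)

    S = sylvester(A, B)                    # a 5x5 matrix here
    print(S.det(), resultant(A.as_expr(), B.as_expr(), x))    # both 0 (common root x = 1)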

See also:  Bézout matrix  (Bézoutian)


(2006-12-31)   Discriminant of a polynomial   (Sylvester, 1851)
The resultant of a polynomial and its derivative.

I fondly remember learning this orally from  Claude Delagarde (1974).

The word "discriminant" was coined by James Joseph Sylvester (1814-1897) who found the correct expression for cubic polynomials (in 1851) and generalized it to  any  polynomial  (including the well-known quadratic case).

 Come back later, we're 
 still working on this one...

Discriminant  (Russian Wikipedia)


(2018-06-30)   Pfaffian   (Cayley, 1852)
The determinant of an  antisymmetric  matrix is the square of its  Pfaffian.

The  Pfaffian  is zero for odd orders.  For an antisymmetrical matrix of even order  2n,  the  Pfaffian  is an homogeneous polynomial of degree  n  in the coefficients of the matrix.

 Come back later, we're 
 still working on this one...

Pfaffian   |   Johann Friedrich Pfaff (1765-1825)   |   Arthur Cayley (1821-1895)


(2020-05-16)   Matrices over noncommutative rings
Including matrices over  skew fields  (e.g.,  quaternions).

 Come back later, we're 
 still working on this one...

Matrices over noncommutative rings  (Mathematics Stack Exchange, 2016-10-20).
 
Artin-Wedderburn classification theorem   |   Emil Artin (1899-1962)   |   Joseph Wedderburn (1882-1948)
 
Morita equivalence (category theory)   |   Kiiti Morita (1915-1995)


(2020-09-23)   Matrix Exponential
If  X  and  Y  commute,  then   exp (X)  exp (Y)  =  exp (X+Y)

 Come back later, we're 
 still working on this one...

Det ( exp(t A) )   =   exp( t Tr(A) )
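That last identity is easy to test numerically (numpy and scipy assumed):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(2)
    A = rng.standard_normal((4, 4))
    t = 0.7
    print(np.isclose(np.linalg.det(expm(t * A)), np.exp(t * np.trace(A))))   # True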

Matrix exponential
 
How to raise e to the power of a matrix... and why (27:06)  by  Grant Sanderson  (2021-04-01).


(2021-07-05)   Matrix Gamma function
The reciprocal gamma function is an  entire  function:  g(X)  =  1 / Γ(X)

 Come back later, we're 
 still working on this one...

Gamma function   |   Lanczos approximation
 
Computation of matrix gamma function  Joao R. Cardoso, Amir Sadeghi  (arXiv, 2018-06-27).
Generalized Extended Matrix Variate Beta and...  Nagar, Noguera, Gupta


(2021-07-16)   Matrix Mittag-Leffler functions and  fractional calculus.

 Come back later, we're 
 still working on this one...

Mittag-Leffler function   |   Gösta Mittag-Leffler (1846-1927).
 
Computing the matrix Mittag-Leffler function with applications to fractional calculus
by  Roberto Garrappa  and  Marina Popolizio  (arXiv, 2018-04-13).


(2021-06-06)   Analytic Functions of Nilpotent Matrices
They can be computed as  finite  sums.

One intriguing example,  due to Euler,  is the  factorial  of a particular matrix:

[ 0  1 ] !    =    Γ ( [ 1  1 ] )    =    [ 1  -γ ]
[ 0  0 ]               [ 0  1 ]           [ 0   1 ]

where  γ  is the Euler-Mascheroni constant.
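The key numerical ingredient is just  Γ'(1) = -γ,  which mpmath (assumed here) can confirm; the matrix result then follows from the rule  f(I+N) = f(1) I + f'(1) N  for a matrix N of zero square:

    from mpmath import mp, diff, gamma, euler

    mp.dps = 30
    print(diff(gamma, 1))    # -0.57721566490153286060651209008...   =   Gamma'(1)
    print(-euler)            # the same value, so  Gamma(I+N)  =  I - gamma N  in the 2x2 case above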

 Come back later, we're 
 still working on this one...

Computation of matrix gamma function  João R. Cardoso  &  Amir Sadeghi   (arXiv, 2018-06-27).
 
Divergent Series Redux
 
Euler's Secret (5:20)  by  Peyam R. Tabrizian  (Dr Peyam, 2021-05-12).

 (c) Copyright 2000-2023, Gerard P. Michon, Ph.D.