Show That the Product of Two N X N Diagonal Matrices Is Again a Diagonal Matrix

Matrix whose only nonzero elements are on its main diagonal

In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is $\left[\begin{smallmatrix}3&0\\0&2\end{smallmatrix}\right]$, while an example of a 3×3 diagonal matrix is $\left[\begin{smallmatrix}6&0&0\\0&0&0\\0&0&0\end{smallmatrix}\right]$. An identity matrix of any size, or any multiple of it (a scalar matrix), is a diagonal matrix.

A diagonal matrix is sometimes called a scaling matrix, since matrix multiplication with it results in changing scale (size). Its determinant is the product of its diagonal values.

Definition

As stated above, a diagonal matrix is a matrix in which all off-diagonal entries are zero. That is, the matrix $D = (d_{i,j})$ with $n$ columns and $n$ rows is diagonal if

$$\forall i,j \in \{1,2,\ldots,n\},\quad i \neq j \implies d_{i,j} = 0.$$

However, the main diagonal entries are unrestricted.

The term diagonal matrix may sometimes refer to a rectangular diagonal matrix, which is an $m$-by-$n$ matrix with all the entries not of the form $d_{i,i}$ being zero. For example:

$$\begin{bmatrix}1&0&0\\0&4&0\\0&0&-3\\0&0&0\end{bmatrix}\quad\text{or}\quad\begin{bmatrix}1&0&0&0&0\\0&4&0&0&0\\0&0&-3&0&0\end{bmatrix}$$

More often, however, diagonal matrix refers to square matrices, which can be specified explicitly as a square diagonal matrix. A square diagonal matrix is a symmetric matrix, so this can also be called a symmetric diagonal matrix.

The following matrix is an example of a square diagonal matrix:

$$\begin{bmatrix}1&0&0\\0&4&0\\0&0&-2\end{bmatrix}$$

If the entries are real numbers or complex numbers, then it is a normal matrix as well.

In the rest of this article we will consider only square diagonal matrices, and refer to them simply as "diagonal matrices".

Vector-to-matrix diag operator

A diagonal matrix $D$ can be constructed from a vector $\mathbf{a} = \begin{bmatrix}a_1 & \cdots & a_n\end{bmatrix}^{\mathsf{T}}$ using the $\operatorname{diag}$ operator:

$$D = \operatorname{diag}(a_1, \ldots, a_n)$$

This may be written more compactly as $D = \operatorname{diag}(\mathbf{a})$.

The same operator is also used to represent block diagonal matrices as $A = \operatorname{diag}(A_1, \ldots, A_n)$, where each argument $A_i$ is a matrix.

The $\operatorname{diag}$ operator may be written as:

$$\operatorname{diag}(\mathbf{a}) = \left(\mathbf{a}\mathbf{1}^{\mathsf{T}}\right) \circ I$$

where $\circ$ represents the Hadamard product and $\mathbf{1}$ is a constant vector with elements 1.
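As a quick numerical illustration of this identity, the following sketch (a minimal NumPy example, not taken from the source) builds $\operatorname{diag}(\mathbf{a})$ both from the Hadamard-product formula and with the library's built-in constructor:

```python
import numpy as np

a = np.array([3.0, 1.0, 2.0])
n = a.size

# The outer product a 1^T puts a_i in every column of row i; the
# Hadamard product with the identity then keeps only the diagonal.
D = np.outer(a, np.ones(n)) * np.eye(n)

assert np.array_equal(D, np.diag(a))
```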

Matrix-to-vector diag operator

The inverse matrix-to-vector $\operatorname{diag}$ operator is sometimes denoted by the identically named $\operatorname{diag}(D) = \begin{bmatrix}a_1 & \cdots & a_n\end{bmatrix}^{\mathsf{T}}$, where the argument is now a matrix and the result is a vector of its diagonal entries.

The following property holds:

$$\operatorname{diag}(AB) = \sum_{j}\left(A \circ B^{\mathsf{T}}\right)_{ij}$$
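In words, the diagonal of a matrix product $AB$ can be read off as the row sums of the Hadamard product $A \circ B^{\mathsf{T}}$. A short NumPy check of this property (an illustration, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Row sums of the Hadamard product A ∘ B^T equal the diagonal of AB.
assert np.allclose(np.diag(A @ B), (A * B.T).sum(axis=1))
```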

Scalar matrix

A diagonal matrix with equal diagonal entries is a scalar matrix; that is, a scalar multiple λ of the identity matrix I. Its effect on a vector is scalar multiplication by λ. For example, a 3×3 scalar matrix has the form:

$$\begin{bmatrix}\lambda&0&0\\0&\lambda&0\\0&0&\lambda\end{bmatrix} \equiv \lambda I_3$$

The scalar matrices are the center of the algebra of matrices: that is, they are precisely the matrices that commute with all other square matrices of the same size.[a] By contrast, over a field (like the real numbers), a diagonal matrix with all diagonal elements distinct only commutes with diagonal matrices (its centralizer is the set of diagonal matrices). That is because if a diagonal matrix $D = \operatorname{diag}(a_1, \ldots, a_n)$ has $a_i \neq a_j$, then given a matrix $M$ with $m_{ij} \neq 0$, the $(i,j)$ terms of the products are $(DM)_{ij} = a_i m_{ij}$ and $(MD)_{ij} = m_{ij} a_j$, and $a_j m_{ij} \neq m_{ij} a_i$ (since one can divide by $m_{ij}$), so they do not commute unless the off-diagonal terms are zero.[b] Diagonal matrices where the diagonal entries are not all equal or all distinct have centralizers intermediate between the whole space and only diagonal matrices.[1]
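Both commutation facts are easy to observe numerically; the following sketch (an illustrative NumPy example, assuming the random matrix has nonzero off-diagonal entries, which holds with probability 1) shows a scalar matrix commuting while a diagonal matrix with distinct entries does not:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))   # generic matrix; off-diagonals nonzero

S = 5.0 * np.eye(3)               # scalar matrix: commutes with every M
assert np.allclose(S @ M, M @ S)

D = np.diag([1.0, 2.0, 3.0])      # distinct diagonal entries
# (DM)_ij = a_i m_ij while (MD)_ij = m_ij a_j, so D commutes with M
# only if the off-diagonal entries of M vanish.
assert not np.allclose(D @ M, M @ D)
```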

For an abstract vector space V (rather than the concrete vector space $K^n$), the analog of scalar matrices are scalar transformations. This is true more generally for a module M over a ring R, with the endomorphism algebra End(M) (algebra of linear operators on M) replacing the algebra of matrices. Formally, scalar multiplication is a linear map, inducing a map $R \to \operatorname{End}(M)$ (from a scalar λ to its corresponding scalar transformation, multiplication by λ), exhibiting End(M) as an R-algebra. For vector spaces, the scalar transforms are exactly the center of the endomorphism algebra, and, similarly, invertible scalar transforms are the center of the general linear group GL(V). The former is more generally true for free modules $M \cong R^n$, for which the endomorphism algebra is isomorphic to a matrix algebra.

Vector operations

Multiplying a vector by a diagonal matrix multiplies each of the terms by the corresponding diagonal entry. Given a diagonal matrix $D = \operatorname{diag}(a_1, \ldots, a_n)$ and a vector $\mathbf{v} = \begin{bmatrix}x_1 & \cdots & x_n\end{bmatrix}^{\mathsf{T}}$, the product is:

$$D\mathbf{v} = \operatorname{diag}(a_1, \ldots, a_n)\begin{bmatrix}x_1\\\vdots\\x_n\end{bmatrix} = \begin{bmatrix}a_1\\&\ddots\\&&a_n\end{bmatrix}\begin{bmatrix}x_1\\\vdots\\x_n\end{bmatrix} = \begin{bmatrix}a_1 x_1\\\vdots\\a_n x_n\end{bmatrix}.$$

This can be expressed more compactly by using a vector instead of a diagonal matrix, $\mathbf{d} = \begin{bmatrix}a_1 & \cdots & a_n\end{bmatrix}^{\mathsf{T}}$, and taking the Hadamard product of the vectors (entrywise product), denoted $\mathbf{d} \circ \mathbf{v}$:

$$D\mathbf{v} = \mathbf{d} \circ \mathbf{v} = \begin{bmatrix}a_1\\\vdots\\a_n\end{bmatrix} \circ \begin{bmatrix}x_1\\\vdots\\x_n\end{bmatrix} = \begin{bmatrix}a_1 x_1\\\vdots\\a_n x_n\end{bmatrix}.$$

This is mathematically equivalent, but avoids storing all the zero terms of this sparse matrix. This product is thus used in machine learning, such as computing products of derivatives in backpropagation or multiplying IDF weights in TF-IDF,[2] since some BLAS frameworks, which multiply matrices efficiently, do not include Hadamard product capability directly.[3]
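To make the equivalence concrete, here is a small NumPy sketch (an illustration, not from the source) comparing the dense matrix product with the entrywise product; the latter needs only O(n) time and storage:

```python
import numpy as np

d = np.array([0.5, 2.0, 3.0])    # diagonal entries, e.g. IDF weights
v = np.array([4.0, 1.0, 2.0])

dense = np.diag(d) @ v           # builds an n×n matrix that is mostly zeros
entrywise = d * v                # Hadamard product: no matrix is ever stored

assert np.allclose(dense, entrywise)
```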

Matrix operations

The operations of matrix addition and matrix multiplication are especially simple for diagonal matrices. Write $\operatorname{diag}(a_1, \ldots, a_n)$ for a diagonal matrix whose diagonal entries starting in the upper left corner are $a_1, \ldots, a_n$. Then, for addition, we have

$$\operatorname{diag}(a_1, \ldots, a_n) + \operatorname{diag}(b_1, \ldots, b_n) = \operatorname{diag}(a_1 + b_1, \ldots, a_n + b_n)$$

and for matrix multiplication,

$$\operatorname{diag}(a_1, \ldots, a_n)\operatorname{diag}(b_1, \ldots, b_n) = \operatorname{diag}(a_1 b_1, \ldots, a_n b_n).$$

This is exactly the claim in the title: writing $A = \operatorname{diag}(a_1, \ldots, a_n)$ and $B = \operatorname{diag}(b_1, \ldots, b_n)$, the entry $(AB)_{ij} = \sum_k a_i\delta_{ik}\, b_j\delta_{kj} = a_i b_j\,\delta_{ij}$ vanishes unless $i = j$, so the product is again diagonal, with diagonal entries $a_i b_i$.
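A quick numerical check of this product rule (a minimal NumPy example, not from the source):

```python
import numpy as np

a = np.array([2.0, -1.0, 3.0])
b = np.array([5.0, 4.0, 0.5])

P = np.diag(a) @ np.diag(b)

# The product is diagonal (all off-diagonal entries are zero) ...
assert np.allclose(P, np.diag(np.diag(P)))
# ... and its diagonal is the entrywise product a_i * b_i.
assert np.allclose(np.diag(P), a * b)
```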

The diagonal matrix $\operatorname{diag}(a_1, \ldots, a_n)$ is invertible if and only if the entries $a_1, \ldots, a_n$ are all nonzero. In this case, we have

$$\operatorname{diag}(a_1, \ldots, a_n)^{-1} = \operatorname{diag}(a_1^{-1}, \ldots, a_n^{-1}).$$

In particular, the diagonal matrices form a subring of the ring of all n-by-n matrices.

Multiplying an $n$-by-$n$ matrix $A$ from the left with $\operatorname{diag}(a_1, \ldots, a_n)$ amounts to multiplying the $i$th row of $A$ by $a_i$ for all $i$; multiplying the matrix $A$ from the right with $\operatorname{diag}(a_1, \ldots, a_n)$ amounts to multiplying the $i$th column of $A$ by $a_i$ for all $i$.
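The row-versus-column scaling can be seen directly with broadcasting; a short illustrative NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
a = np.array([2.0, 3.0, 5.0])
D = np.diag(a)

# Left multiplication scales row i of A by a_i ...
assert np.allclose(D @ A, a[:, None] * A)
# ... while right multiplication scales column i of A by a_i.
assert np.allclose(A @ D, A * a[None, :])
```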

Operator matrix in eigenbasis

As explained in determining coefficients of operator matrix, there is a special basis, $\mathbf{e}_1, \ldots, \mathbf{e}_n$, for which the matrix $A$ takes the diagonal form. Hence, in the defining equation $A\mathbf{e}_j = \sum_i a_{i,j}\mathbf{e}_i$, all coefficients $a_{i,j}$ with $i \neq j$ are zero, leaving only one term per sum. The surviving diagonal elements, $a_{i,i}$, are known as eigenvalues and designated $\lambda_i$ in the equation, which reduces to $A\mathbf{e}_i = \lambda_i\mathbf{e}_i$. The resulting equation is known as the eigenvalue equation[4] and is used to derive the characteristic polynomial and, further, eigenvalues and eigenvectors.

In other words, the eigenvalues of $\operatorname{diag}(\lambda_1, \ldots, \lambda_n)$ are $\lambda_1, \ldots, \lambda_n$, with associated eigenvectors $\mathbf{e}_1, \ldots, \mathbf{e}_n$.
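A small NumPy illustration of this fact (hedged: the solver may reorder the eigenvalues and flip eigenvector signs, so the checks are written order-independently):

```python
import numpy as np

lam = np.array([3.0, 1.0, 4.0])
D = np.diag(lam)

eigenvalues, eigenvectors = np.linalg.eig(D)

# The eigenvalues are the diagonal entries (possibly reordered), and
# each column v_k satisfies the eigenvalue equation D v_k = λ_k v_k;
# for a diagonal matrix the v_k are standard basis vectors up to sign.
assert np.allclose(np.sort(eigenvalues), np.sort(lam))
assert np.allclose(D @ eigenvectors, eigenvectors * eigenvalues)
```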

Properties

  • The determinant of $\operatorname{diag}(a_1, \ldots, a_n)$ is the product $a_1 \cdots a_n$.
  • The adjugate of a diagonal matrix is again diagonal.
  • Where all matrices are square,
    • A matrix is diagonal if and only if it is triangular and normal.
    • A matrix is diagonal if and only if it is both upper- and lower-triangular.
    • A diagonal matrix is symmetric.
  • The identity matrix I_n and zero matrix are diagonal.
  • A 1×1 matrix is always diagonal.

Applications

Diagonal matrices occur in many areas of linear algebra. Because of the simple description of the matrix operations and eigenvalues/eigenvectors given above, it is typically desirable to represent a given matrix or linear map by a diagonal matrix.

In fact, a given $n$-by-$n$ matrix $A$ is similar to a diagonal matrix (meaning that there is a matrix $X$ such that $X^{-1}AX$ is diagonal) if and only if it has $n$ linearly independent eigenvectors. Such matrices are said to be diagonalizable.

Over the field of real or complex numbers, more is true. The spectral theorem says that every normal matrix is unitarily similar to a diagonal matrix (if $AA^* = A^*A$ then there exists a unitary matrix $U$ such that $UAU^*$ is diagonal). Furthermore, the singular value decomposition implies that for any matrix $A$, there exist unitary matrices $U$ and $V$ such that $U^*AV$ is diagonal with positive entries.
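Both statements can be checked numerically for real matrices, where "unitary" specializes to "orthogonal"; a minimal NumPy sketch (an illustration, not from the source):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))

# A real symmetric matrix is normal, so it is unitarily (here:
# orthogonally) diagonalizable: U^T A U is diagonal.
A = M + M.T
w, U = np.linalg.eigh(A)
assert np.allclose(U.T @ A @ U, np.diag(w))

# The SVD makes *any* matrix diagonal: U^T M V has the singular
# values on its diagonal.
U2, s, Vt = np.linalg.svd(M)
assert np.allclose(U2.T @ M @ Vt.T, np.diag(s))
```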

Operator theory

In operator theory, particularly the study of PDEs, operators are especially easy to understand and PDEs easy to solve if the operator is diagonal with respect to the basis with which one is working; this corresponds to a separable partial differential equation. Therefore, a key technique to understanding operators is a change of coordinates (in the language of operators, an integral transform) which changes the basis to an eigenbasis of eigenfunctions, making the equation separable. An important example of this is the Fourier transform, which diagonalizes constant coefficient differentiation operators (or more generally translation invariant operators), such as the Laplacian operator, say, in the heat equation.

Especially easy are multiplication operators, which are defined as multiplication by (the values of) a fixed function; the values of the function at each point correspond to the diagonal entries of a matrix.

See also

  • Anti-diagonal matrix
  • Banded matrix
  • Bidiagonal matrix
  • Diagonally dominant matrix
  • Diagonalizable matrix
  • Jordan normal form
  • Multiplication operator
  • Tridiagonal matrix
  • Toeplitz matrix
  • Toral Lie algebra
  • Circulant matrix

Notes

References

  1. ^ "Do Diagonal Matrices Always Commute?". Stack Exchange. March 15, 2016. Retrieved August 4, 2018.
  2. ^ Sahami, Mehran (2009-06-15). Text Mining: Classification, Clustering, and Applications. CRC Press. p. 14. ISBN 9781420059458.
  3. ^ "Element-wise vector-vector multiplication in BLAS?". stackoverflow.com. 2011-10-01. Retrieved 2020-08-30.
  4. ^ Nearing, James (2010). "Chapter 7.9: Eigenvalues and Eigenvectors" (PDF). Mathematical Tools for Physics. ISBN 978-0486482125. Retrieved January 1, 2012.


Source: https://en.wikipedia.org/wiki/Diagonal_matrix
