Let \(Q\left( x \right) = 7x_1^2 + x_2^2 + 7x_3^2 - 8{x_1}{x_2} - 4{x_1}{x_3} - 8{x_2}{x_3}\). Find a unit vector \(x\) in \({\mathbb{R}^3}\) at which \(Q\left( x \right)\) is maximized, subject to \({x^T}x = 1\).

[Hint: The eigenvalues of the matrix of the quadratic form \(Q\) are \(9\) (with multiplicity 2) and \( - 3\).]

Short Answer

Expert verified

A unit vector \({\rm{x}}\) in \({\mathbb{R}^3}\) at which \(Q\left( {\rm{x}} \right)\) is maximized, subject to \({{\rm{x}}^T}{\rm{x}} = 1\), is \({\rm{u}} = \pm \frac{1}{{\sqrt 2 }}\left[ {\begin{array}{*{20}{c}}{ - 1}\\0\\1\end{array}} \right]\) (any unit vector in the eigenspace for \(\lambda = 9\) works).

Step by step solution

01

Find the eigenvalues 

As per the question, we have:

\(Q\left( {\rm{x}} \right) = 7x_1^2 + x_2^2 + 7x_3^2 - 8{x_1}{x_2} - 4{x_1}{x_3} - 8{x_2}{x_3}\)

And the eigenvalues are:

\(\begin{array}{l}{\lambda _1} = 9\\{\lambda _2} = - 3\end{array}\)

The maximum value of the given function, subject to the constraint \({{\rm{x}}^T}{\rm{x}} = 1\), is the greatest eigenvalue.

So, we have:

\({\lambda _1} = 9\)
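As a check, the matrix \(A\) of the quadratic form can be read off from the coefficients (diagonal entries from the squared terms, off-diagonal entries equal to half the cross-term coefficients) and its eigenvalues computed numerically. This is a sketch using NumPy, not part of the original solution:

```python
import numpy as np

# Matrix of the quadratic form Q, assembled from the coefficients above:
# diagonal = coefficients of x1^2, x2^2, x3^2; off-diagonal (i, j) entry
# = half the coefficient of x_i * x_j.
A = np.array([[ 7, -4, -2],
              [-4,  1, -4],
              [-2, -4,  7]], dtype=float)

# eigvalsh returns the eigenvalues of a symmetric matrix in ascending order.
eigvals = np.linalg.eigvalsh(A)
print(np.round(eigvals, 6))  # -> [-3.  9.  9.]
```

This confirms the eigenvalues \(9\) (multiplicity 2) and \(-3\), so the constrained maximum of \(Q\) is \(9\).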

02

Find the vector for this greatest eigenvalue 

Apply the theorem which states that \({{\rm{x}}^T}A{\rm{x}}\) attains its maximum, subject to \({{\rm{x}}^T}{\rm{x}} = 1\), when \({\rm{x}}\) is a unit eigenvector \({{\rm{u}}_1}\) corresponding to the greatest eigenvalue \({\lambda _1}\).

The eigenvectors corresponding to \({\lambda _1} = 9\) are the linear combinations of \(\left[ {\begin{array}{*{20}{c}}{ - 1}\\0\\1\end{array}} \right]\) and \(\left[ {\begin{array}{*{20}{c}}{ - 2}\\1\\0\end{array}} \right]\).

A unit eigenvector that corresponds to the eigenvalue \({\lambda _1} = 9\) is obtained by normalizing, for instance, the first of these vectors:

\({\rm{u}} = \pm \frac{1}{{\sqrt 2 }}\left[ {\begin{array}{*{20}{c}}{ - 1}\\0\\1\end{array}} \right]\).

Hence, a unit vector \({\rm{x}}\) in \({\mathbb{R}^3}\) at which \(Q\left( {\rm{x}} \right)\) is maximized, subject to \({{\rm{x}}^T}{\rm{x}} = 1\), is \({\rm{u}} = \pm \frac{1}{{\sqrt 2 }}\left[ {\begin{array}{*{20}{c}}{ - 1}\\0\\1\end{array}} \right]\); any other unit vector in the eigenspace for \({\lambda _1} = 9\) also attains the maximum \(Q\left( {\rm{u}} \right) = 9\).
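The conclusion can be verified numerically by evaluating \({{\rm{u}}^T}A{\rm{u}}\) at a unit eigenvector for \(\lambda = 9\). A minimal sketch with NumPy (the matrix \(A\) is assembled from the coefficients of \(Q\) as in Step 1):

```python
import numpy as np

# Matrix of the quadratic form Q.
A = np.array([[ 7, -4, -2],
              [-4,  1, -4],
              [-2, -4,  7]], dtype=float)

# One unit eigenvector from the eigenspace for lambda = 9.
u = np.array([-1.0, 0.0, 1.0]) / np.sqrt(2)

Q = u @ A @ u          # x^T A x evaluated at the unit vector u
print(round(Q, 6))     # -> 9.0
```

Evaluating \(Q\) at any other unit vector in the same eigenspace, such as \(\frac{1}{\sqrt 5}\left( { - 2,1,0} \right)\), gives the same maximum value \(9\).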

