In Exercises 1-6, the given set is a basis for a subspace W. Use the Gram-Schmidt process to produce an orthogonal basis for W.

6. \(\begin{pmatrix} 3 \\ -1 \\ 2 \\ -1 \end{pmatrix},\ \begin{pmatrix} -5 \\ 9 \\ -9 \\ 3 \end{pmatrix}\)

Short Answer


\(\left\{ \begin{pmatrix} 3 \\ -1 \\ 2 \\ -1 \end{pmatrix},\ \begin{pmatrix} 4 \\ 6 \\ -3 \\ 0 \end{pmatrix} \right\}\) is an orthogonal basis for \(W\).

Step by step solution

01

The Gram-Schmidt process

Given a basis \(\left\{ {{\bf{x}}_1}, \ldots ,{{\bf{x}}_p} \right\}\) for a nonzero subspace \(W\) of \({\mathbb{R}^n}\), the Gram-Schmidt process defines the vectors shown below:

\(\begin{aligned} {\bf{v}}_1 &= {\bf{x}}_1\\ {\bf{v}}_2 &= {\bf{x}}_2 - \frac{{\bf{x}}_2 \cdot {\bf{v}}_1}{{\bf{v}}_1 \cdot {\bf{v}}_1}{\bf{v}}_1\\ &\ \ \vdots\\ {\bf{v}}_p &= {\bf{x}}_p - \frac{{\bf{x}}_p \cdot {\bf{v}}_1}{{\bf{v}}_1 \cdot {\bf{v}}_1}{\bf{v}}_1 - \frac{{\bf{x}}_p \cdot {\bf{v}}_2}{{\bf{v}}_2 \cdot {\bf{v}}_2}{\bf{v}}_2 - \cdots - \frac{{\bf{x}}_p \cdot {\bf{v}}_{p - 1}}{{\bf{v}}_{p - 1} \cdot {\bf{v}}_{p - 1}}{\bf{v}}_{p - 1} \end{aligned}\)

Therefore, the orthogonal basis for \(W\) is \(\left\{ {{\bf{v}}_1}, \ldots ,{{\bf{v}}_p} \right\}\). Furthermore,

\({\mathop{\rm Span}\nolimits} \left\{ {{{\bf{v}}_1}, \ldots ,{{\bf{v}}_k}} \right\} = {\mathop{\rm Span}\nolimits} \left\{ {{{\bf{x}}_1}, \ldots ,{{\bf{x}}_k}} \right\}\) for \(1 \le k \le p\).
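As an illustration of this process, here is a minimal NumPy sketch (the helper name `gram_schmidt` and the use of NumPy are choices made here, not part of the textbook):

```python
import numpy as np

def gram_schmidt(X):
    """Return an array whose columns are an orthogonal basis for Col X.

    The columns x_1, ..., x_p of X are assumed to be linearly independent.
    """
    X = np.asarray(X, dtype=float)
    basis = []
    for j in range(X.shape[1]):
        v = X[:, j].copy()
        # Subtract the projection of x_j onto each v_k already built.
        for u in basis:
            v -= (X[:, j] @ u) / (u @ u) * u
        basis.append(v)
    return np.column_stack(basis)
```

Applying this sketch to the columns \({\bf{x}}_1,{\bf{x}}_2\) of this exercise reproduces the basis computed in Step 2 below.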

02

Use the Gram-Schmidt process to produce an orthogonal basis for W

Let \({\bf{x}}_1 = \begin{pmatrix} 3 \\ -1 \\ 2 \\ -1 \end{pmatrix},\ {\bf{x}}_2 = \begin{pmatrix} -5 \\ 9 \\ -9 \\ 3 \end{pmatrix}\).

Apply the Gram-Schmidt process with \({\bf{v}}_1 = {\bf{x}}_1\) and compute \({\bf{v}}_2\) as shown below:

\(\begin{aligned} {\bf{v}}_2 &= {\bf{x}}_2 - \frac{{\bf{x}}_2 \cdot {\bf{v}}_1}{{\bf{v}}_1 \cdot {\bf{v}}_1}{\bf{v}}_1 = {\bf{x}}_2 - \frac{-45}{15}{\bf{v}}_1 = {\bf{x}}_2 - \left( -3 \right){\bf{v}}_1\\ &= \begin{pmatrix} -5 \\ 9 \\ -9 \\ 3 \end{pmatrix} + 3\begin{pmatrix} 3 \\ -1 \\ 2 \\ -1 \end{pmatrix} = \begin{pmatrix} -5 + 9 \\ 9 - 3 \\ -9 + 6 \\ 3 - 3 \end{pmatrix} = \begin{pmatrix} 4 \\ 6 \\ -3 \\ 0 \end{pmatrix} \end{aligned}\)

Hence, an orthogonal basis for \(W\) is \(\left\{ \begin{pmatrix} 3 \\ -1 \\ 2 \\ -1 \end{pmatrix},\ \begin{pmatrix} 4 \\ 6 \\ -3 \\ 0 \end{pmatrix} \right\}\).
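A quick NumPy check of this arithmetic (a small sketch added here, not part of the printed solution) confirms the result and the orthogonality:

```python
import numpy as np

x1 = np.array([3.0, -1.0, 2.0, -1.0])
x2 = np.array([-5.0, 9.0, -9.0, 3.0])

v1 = x1
v2 = x2 - (x2 @ v1) / (v1 @ v1) * v1   # x2.v1 = -45, v1.v1 = 15, coefficient -3

print(v2)        # [ 4.  6. -3.  0.]
print(v1 @ v2)   # 0.0 -- the two vectors are orthogonal
```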


Most popular questions from this chapter

A simple curve that often makes a good model for the variable costs of a company, as a function of the sales level \(x\), has the form \(y = {\beta _1}x + {\beta _2}{x^2} + {\beta _3}{x^3}\). There is no constant term because fixed costs are not included.

a. Give the design matrix and the parameter vector for the linear model that leads to a least-squares fit of the equation above, with data \(\left( {{x_1},{y_1}} \right), \ldots ,\left( {{x_n},{y_n}} \right)\).

b. Find the least-squares curve of the form above to fit the data \(\left( {4,1.58} \right),\left( {6,2.08} \right),\left( {8,2.5} \right),\left( {10,2.8} \right),\left( {12,3.1} \right),\left( {14,3.4} \right),\left( {16,3.8} \right)\) and \(\left( {18,4.32} \right)\), with values in thousands. If possible, produce a graph that shows the data points and the graph of the cubic approximation.
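For orientation, a hedged NumPy sketch of part (a)'s design matrix and a least-squares fit of the given data might look like this (the coefficient values are whatever `lstsq` returns; none are quoted from the text):

```python
import numpy as np

# Data from the exercise (values in thousands).
x = np.array([4, 6, 8, 10, 12, 14, 16, 18], dtype=float)
y = np.array([1.58, 2.08, 2.5, 2.8, 3.1, 3.4, 3.8, 4.32])

# Design matrix: row i is (x_i, x_i**2, x_i**3); no column of ones,
# since the model y = b1*x + b2*x**2 + b3*x**3 has no constant term.
X = np.column_stack([x, x**2, x**3])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # least-squares estimates of beta_1, beta_2, beta_3
```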

In Exercises 13 and 14, the columns of Q were obtained by applying the Gram-Schmidt process to the columns of A. Find an upper triangular matrix R such that \(A = QR\). Check your work.

13. \(A = \begin{pmatrix} 5 & 9 \\ 1 & 7 \\ -3 & -5 \\ 1 & 5 \end{pmatrix},{\rm{ }}Q = \begin{pmatrix} \frac{5}{6} & -\frac{1}{6} \\ \frac{1}{6} & \frac{5}{6} \\ -\frac{3}{6} & \frac{1}{6} \\ \frac{1}{6} & \frac{3}{6} \end{pmatrix}\)
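Because the columns of \(Q\) are orthonormal, \(R\) can be recovered as \(R = Q^TA\); a minimal NumPy check of that idea (a sketch, not the full worked solution) is:

```python
import numpy as np

A = np.array([[5, 9], [1, 7], [-3, -5], [1, 5]], dtype=float)
Q = np.array([[5, -1], [1, 5], [-3, 1], [1, 3]], dtype=float) / 6

R = Q.T @ A                    # upper triangular when Q has orthonormal columns
print(R)
print(np.allclose(Q @ R, A))   # True: the factorization reproduces A
```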

Exercises 19 and 20 involve a design matrix \(X\) with two or more columns and a least-squares solution \(\hat \beta \) of \({\bf{y}} = X\beta \). Consider the following numbers.

(i) \({\left\| {X\hat \beta } \right\|^2}\)—the sum of the squares of the “regression term.” Denote this number by \(SS\left( R \right)\).

(ii) \({\left\| {{\bf{y}} - X\hat \beta } \right\|^2}\)—the sum of the squares for the error term. Denote this number by \(SS\left( E \right)\).

(iii) \({\left\| {\bf{y}} \right\|^2}\)—the “total” sum of the squares of the \(y\)-values. Denote this number by \(SS\left( T \right)\).

Every statistics text that discusses regression and the linear model \({\bf{y}} = X\beta + \epsilon \) introduces these numbers, though terminology and notation vary somewhat. To simplify matters, assume that the mean of the \(y\)-values is zero. In this case, \(SS\left( T \right)\) is proportional to what is called the variance of the set of \(y\)-values.

20. Show that \({\left\| {X\hat \beta } \right\|^2} = {\hat \beta ^T}{X^T}{\bf{y}}\). (Hint: Rewrite the left side and use the fact that \(\hat \beta \) satisfies the normal equations.) This formula for \(SS\left( R \right)\) is used in statistics. From this and from Exercise 19, obtain the standard formula for \(SS\left( E \right)\):

\(SS\left( E \right) = {{\bf{y}}^T}{\bf{y}} - {\hat \beta ^T}{X^T}{\bf{y}}\)
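As a numerical illustration of these identities, the following sketch uses deliberately made-up data (not data from the exercise) and checks both formulas:

```python
import numpy as np

# Made-up data, used only to exercise the formulas.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = rng.normal(size=20)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

ss_r = np.linalg.norm(X @ beta_hat) ** 2
ss_e = np.linalg.norm(y - X @ beta_hat) ** 2

print(np.isclose(ss_r, beta_hat @ X.T @ y))          # Exercise 20 identity
print(np.isclose(ss_e, y @ y - beta_hat @ X.T @ y))  # standard SS(E) formula
```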

In Exercises 1-6, the given set is a basis for a subspace W. Use the Gram-Schmidt process to produce an orthogonal basis for W.

2. \(\begin{pmatrix} 0 \\ 4 \\ 2 \end{pmatrix},\ \begin{pmatrix} 5 \\ 6 \\ -7 \end{pmatrix}\)
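Repeating the same projection step as in the worked exercise above, a small NumPy check for this pair of vectors is:

```python
import numpy as np

x1 = np.array([0.0, 4.0, 2.0])
x2 = np.array([5.0, 6.0, -7.0])

v1 = x1
v2 = x2 - (x2 @ v1) / (v1 @ v1) * v1   # x2.v1 = 10, v1.v1 = 20
print(v2)                              # [ 5.  4. -8.]
```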

Given data for a least-squares problem, \(\left( {{x_1},{y_1}} \right), \ldots ,\left( {{x_n},{y_n}} \right)\), the following abbreviations are helpful:

\(\sum x = \sum\nolimits_{i = 1}^n {x_i},\quad \sum {x^2} = \sum\nolimits_{i = 1}^n {x_i^2},\quad \sum y = \sum\nolimits_{i = 1}^n {y_i},\quad \sum {xy} = \sum\nolimits_{i = 1}^n {x_i y_i}\)

The normal equations for a least-squares line \(y = {\hat \beta _0} + {\hat \beta _1}x\) may be written in the form

\(\begin{aligned} n{\hat \beta _0} + {\hat \beta _1}\sum x &= \sum y\\ {\hat \beta _0}\sum x + {\hat \beta _1}\sum {x^2} &= \sum {xy} \end{aligned}\qquad (7)\)

Derive the normal equations (7) from the matrix form given in this section.
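A brief sketch of that derivation (using the design matrix from this section, whose first column is all ones and whose second column holds the \(x_i\)): the matrix normal equations \(X^TX\hat \beta = X^T{\bf{y}}\) become

\(\begin{pmatrix} n & \sum x \\ \sum x & \sum {x^2} \end{pmatrix}\begin{pmatrix} {\hat \beta _0} \\ {\hat \beta _1} \end{pmatrix} = \begin{pmatrix} \sum y \\ \sum {xy} \end{pmatrix}\),

and multiplying out the left side gives exactly the two equations in (7).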
