Taylor's theorem shows that any function can be approximated, in the vicinity of a convenient point, by a series of terms involving the function and its derivatives. Here we look at some applications of the theorem for functions of one and two variables.

a. Any continuous and differentiable function of a single variable, \(f(x),\) can be approximated near the point \(a\) by the formula \\[ f(x)=f(a)+f^{\prime}(a)(x-a)+0.5 f^{\prime \prime}(a)(x-a)^{2}+\text { terms in } f^{\prime \prime \prime}, f^{\prime \prime \prime \prime}, \ldots \\] Using only the first three of these terms results in a quadratic Taylor approximation. Use this approximation together with the definition of concavity given in Equation 2.85 to show that any concave function must lie on or below the tangent to the function at point \(a\).

b. The quadratic Taylor approximation for any function of two variables, \(f(x, y),\) near the point \((a, b)\) is given by \\[ \begin{aligned} f(x, y)=& f(a, b)+f_{1}(a, b)(x-a)+f_{2}(a, b)(y-b) \\ &+0.5\left[f_{11}(a, b)(x-a)^{2}+2 f_{12}(a, b)(x-a)(y-b)+f_{22}(a, b)(y-b)^{2}\right] \end{aligned} \\] Use this approximation to show that any concave function (as defined by Equation 2.98) must lie on or below its tangent plane at \((a, b)\).

Short Answer

Question: Using the quadratic Taylor approximation, prove that any concave function must lie on or below its tangent line (one variable) or tangent plane (two variables) at a given point. Answer: Dropping the higher-order terms of the Taylor approximation, we compare the function value f(x) or f(x, y) with the tangent line g(x) or tangent plane h(x, y). For a concave function of one variable, the second derivative is less than or equal to zero, so the quadratic term is non-positive and f(x) must lie on or below g(x). For a concave function of two variables, the Hessian matrix is negative semi-definite, so the quadratic form in the approximation is non-positive and f(x, y) must lie on or below h(x, y).

Step by step solution

01

Write down the quadratic Taylor approximation for a single variable function.

Given the quadratic Taylor approximation for a single variable function f(x) near point a: \\[ f(x) = f(a) + f'(a)(x-a) + 0.5f''(a)(x-a)^2 + \text{ terms in }f'''(a), f''''(a), \ldots \\]
02

Ignore the higher order terms for simplicity.

We only need the first three terms of the Taylor approximation for this proof, so we can simplify it to: \\[ f(x) \approx f(a) + f'(a)(x-a) + 0.5f''(a)(x-a)^2 \\]
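To make the approximation concrete, here is a minimal numerical sketch (not from the textbook) that builds the quadratic Taylor approximation for the concave example f(x) = ln x around a = 1; the function choice and the helper name quadratic_taylor are illustrative assumptions.

```python
import math

# Illustrative concave function and its exact derivatives; ln x is only an
# example chosen for this sketch, not part of the original problem.
f = math.log
f_prime = lambda x: 1.0 / x
f_double_prime = lambda x: -1.0 / x ** 2

def quadratic_taylor(x, a):
    """First three Taylor terms of f around the point a."""
    return f(a) + f_prime(a) * (x - a) + 0.5 * f_double_prime(a) * (x - a) ** 2

a = 1.0
for x in (0.8, 1.0, 1.2):
    print(f"x = {x:.2f}   f(x) = {f(x): .5f}   quadratic approx = {quadratic_taylor(x, a): .5f}")
```

Near a the two columns agree closely; the gap grows as x moves away from a, which is why the higher-order terms can be ignored only locally.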
03

Define the tangent line at point a.

The tangent line at point a is given by the equation: \\[ g(x) = f(a) + f'(a)(x-a) \\]
04

Show that the function lies on or below the tangent line.

Since the function is concave, its second derivative must be less than or equal to zero (from the definition of concavity in Equation 2.85): \\[ f''(a) \leq 0 \\] Because \((x-a)^2 \geq 0\), the quadratic term satisfies \(0.5 f''(a)(x-a)^2 \leq 0\). Now compare f(x) and g(x) from Step 2 and Step 3: \\[ f(x) \approx f(a) + f'(a)(x-a) + 0.5f''(a)(x-a)^2 \leq f(a) + f'(a)(x-a) = g(x) \\] Hence, any concave function must lie on or below the tangent to the function at point a.
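The inequality can also be spot-checked numerically. The sketch below (an illustration, not a proof) uses the concave example f(x) = ln x with a = 1 and verifies f(x) ≤ g(x) on a grid of points; the grid and the tolerance are arbitrary choices made only for this example.

```python
import math

# Spot-check that a concave f lies on or below its tangent line g at a.
# f(x) = ln x is used purely as an example of a concave function.
f = math.log
f_prime = lambda x: 1.0 / x

a = 1.0
g = lambda x: f(a) + f_prime(a) * (x - a)      # tangent line at a

grid = [0.5 + 0.05 * i for i in range(41)]     # points in [0.5, 2.5]
assert all(f(x) <= g(x) + 1e-12 for x in grid)
print("f(x) <= g(x) holds at every grid point")
```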
b. Proving concave functions of two variables lie on or below their tangent plane:

05

Write down the quadratic Taylor approximation for a two-variable function.

Given the quadratic Taylor approximation for a two-variable function f(x,y) near point (a, b): \\[ \begin{aligned} f(x, y) \approx & f(a, b) + f_{1}(a, b)(x-a) + f_{2}(a, b)(y-b) \\ & +0.5\left[f_{11}(a, b)(x-a)^{2} + 2 f_{12}(a, b)(x-a)(y-b) + f_{22}(a, b)(y-b)^{2}\right] \end{aligned} \\]
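The two-variable approximation can be written compactly in code. The following sketch is an illustration under stated assumptions, not part of the original solution: it uses f(x, y) = ln x + ln y with exact partial derivatives and evaluates the quadratic approximation near (a, b) = (1, 1).

```python
import math

# Example two-variable concave function and its exact partial derivatives;
# the choice of ln x + ln y is an assumption made only for this sketch.
f   = lambda x, y: math.log(x) + math.log(y)
f1  = lambda x, y: 1.0 / x           # f_x
f2  = lambda x, y: 1.0 / y           # f_y
f11 = lambda x, y: -1.0 / x ** 2     # f_xx
f12 = lambda x, y: 0.0               # f_xy
f22 = lambda x, y: -1.0 / y ** 2     # f_yy

def quadratic_taylor(x, y, a, b):
    """Quadratic Taylor approximation of f near (a, b)."""
    dx, dy = x - a, y - b
    return (f(a, b) + f1(a, b) * dx + f2(a, b) * dy
            + 0.5 * (f11(a, b) * dx ** 2
                     + 2 * f12(a, b) * dx * dy
                     + f22(a, b) * dy ** 2))

print(f(1.1, 0.9), quadratic_taylor(1.1, 0.9, 1.0, 1.0))
```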
06

Define the tangent plane at point (a, b).

The tangent plane at point (a, b) is given by the equation: \\[ h(x, y) = f(a, b) + f_{1}(a, b)(x-a) + f_{2}(a, b)(y-b) \\]
07

Show that the function lies on or below the tangent plane.

A concave function of two variables has a negative semi-definite Hessian at every point (from the definition of concavity): \\[ H(f) = \begin{bmatrix} f_{11}(a,b) & f_{12}(a,b) \\ f_{12}(a,b) & f_{22}(a,b) \end{bmatrix} \preceq 0 \\] Negative semi-definiteness means that for any displacement \(w = (x-a, \; y-b)\), the quadratic form satisfies \(w^{T} H w = f_{11}(a,b)(x-a)^{2} + 2 f_{12}(a,b)(x-a)(y-b) + f_{22}(a,b)(y-b)^{2} \leq 0\). Therefore the bracketed term in the Taylor approximation is non-positive, and: \\[ \begin{aligned} f(x, y) \approx & f(a, b) + f_{1}(a, b)(x-a) + f_{2}(a, b)(y-b) \\ & +0.5\left[f_{11}(a, b)(x-a)^{2} + 2 f_{12}(a, b)(x-a)(y-b) + f_{22}(a, b)(y-b)^{2}\right] \\ & \leq f(a, b) + f_{1}(a, b)(x-a) + f_{2}(a, b)(y-b) \\ & = h(x, y) \end{aligned} \\] Hence, any concave function of two variables must lie on or below its tangent plane at point (a, b).
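A numerical spot-check of both conditions, again using the illustrative concave function f(x, y) = ln x + ln y at (a, b) = (1, 1), is sketched below; the grid, the expansion point, and the tolerance are assumptions made only for this demonstration.

```python
import math

# Spot-check (not a proof): the example concave function f(x, y) = ln x + ln y
# lies on or below its tangent plane at (a, b) = (1, 1), and the Hessian
# quadratic form at that point is non-positive.
f = lambda x, y: math.log(x) + math.log(y)
a, b = 1.0, 1.0
f1, f2 = 1.0 / a, 1.0 / b                          # first partials at (a, b)
f11, f12, f22 = -1.0 / a ** 2, 0.0, -1.0 / b ** 2  # Hessian entries at (a, b)

h = lambda x, y: f(a, b) + f1 * (x - a) + f2 * (y - b)   # tangent plane

pts = [(0.5 + 0.1 * i, 0.5 + 0.1 * j) for i in range(16) for j in range(16)]
assert all(f(x, y) <= h(x, y) + 1e-12 for x, y in pts)

# Quadratic form w' H w for the displacement w = (x - a, y - b)
q = lambda dx, dy: f11 * dx ** 2 + 2 * f12 * dx * dy + f22 * dy ** 2
assert all(q(x - a, y - b) <= 1e-12 for x, y in pts)
print("tangent-plane dominance and w'Hw <= 0 hold at every grid point")
```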

