Because we use the envelope theorem in constrained optimization problems often in the text, proving this theorem in a simple case may help develop some intuition. Thus, suppose we wish to maximize a function of two variables whose value also depends on a parameter \(a\): \(f(x_1, x_2, a)\). This maximization problem is subject to a constraint that can be written as \(g(x_1, x_2, a) = 0\).

a. Write out the Lagrangian expression and the first-order conditions for this problem.

b. Sum the two first-order conditions involving the \(x\)'s.

c. Now differentiate the above sum with respect to \(a\). This shows how the \(x\)'s must change as \(a\) changes while requiring that the first-order conditions continue to hold.

d. As we showed in the chapter, both the objective function and the constraint in this problem can be stated as functions of \(a\): \(f(x_1(a), x_2(a), a)\), \(g(x_1(a), x_2(a), a) = 0\). Differentiate the first of these with respect to \(a\). This shows how the value of the objective changes as \(a\) changes while keeping the \(x\)'s at their optimal values. You should have terms that involve the \(x\)'s and a single term in \(\partial f / \partial a\).

e. Now differentiate the constraint as formulated in part (d) with respect to \(a\). You should have terms in the \(x\)'s and a single term in \(\partial g / \partial a\).

f. Multiply the results from part (e) by \(\lambda\) (the Lagrange multiplier), and use this together with the first-order conditions from part (c) to substitute into the derivative from part (d). You should be able to show that \\[ \frac{d f\left(x_{1}(a), x_{2}(a), a\right)}{d a}=\frac{\partial f}{\partial a}+\lambda \frac{\partial g}{\partial a}, \\] which is just the partial derivative of the Lagrangian expression when all the \(x\)'s are at their optimal values. This proves the envelope theorem. Explain intuitively how the various parts of this proof impose the condition that the \(x\)'s are constantly being adjusted to be at their optimal values.

g. Return to Example 2.8 and explain how the envelope theorem can be applied to changes in the fence perimeter \(P\); that is, how do changes in \(P\) affect the size of the area that can be fenced? Show that in this case the envelope theorem illustrates how the Lagrange multiplier puts a value on the constraint.

Short Answer

Expert verified
The envelope theorem shows that the total derivative of the optimal value of the objective with respect to a parameter \(a\) equals the partial derivative of the Lagrangian with respect to \(a\), evaluated at the optimal \(x\)'s. Because the first-order conditions continue to hold as \(a\) changes, the induced adjustments in the \(x\)'s have no first-order effect on the value of the objective, so the impact of a parameter change can be assessed without re-solving the entire optimization problem for each new value of the parameter.

Step by step solution

01

Part a - Writing the Lagrangian and First-order Conditions

The given maximization problem is \\[ \text{maximize} \hspace{0.1in} f(x_1, x_2, a) \hspace{0.1in}\text{subject to} \hspace{0.1in} g(x_1, x_2, a) = 0. \\] With a single equality constraint, we can use the Lagrangian \\[ \mathcal{L}(x_1, x_2, \lambda, a) = f(x_1, x_2, a) + \lambda \, g(x_1, x_2, a). \\] To find the first-order conditions, take the partial derivatives of the Lagrangian with respect to \(x_1\), \(x_2\), and \(\lambda\) and set them equal to zero:

1. \(\frac{\partial \mathcal{L}}{\partial x_1} = \frac{\partial f}{\partial x_1} + \lambda \frac{\partial g}{\partial x_1} = 0\)

2. \(\frac{\partial \mathcal{L}}{\partial x_2} = \frac{\partial f}{\partial x_2} + \lambda \frac{\partial g}{\partial x_2} = 0\)

3. \(\frac{\partial \mathcal{L}}{\partial \lambda} = g(x_1, x_2, a) = 0\)
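To make the algebra concrete, here is a minimal SymPy sketch of part (a) for an assumed illustrative example, \(f = x_1 x_2\) and \(g = a - x_1 - x_2\) (this particular choice of \(f\) and \(g\) is not from the text):

```python
# Illustrative example (assumed, not from the text):
# f(x1, x2, a) = x1*x2, g(x1, x2, a) = a - x1 - x2.
import sympy as sp

x1, x2, lam, a = sp.symbols('x1 x2 lam a', positive=True)
f = x1 * x2                 # objective
g = a - x1 - x2             # constraint, g = 0
L = f + lam * g             # Lagrangian

# First-order conditions: set dL/dx1, dL/dx2, dL/dlam to zero
focs = [sp.diff(L, v) for v in (x1, x2, lam)]
sol = sp.solve(focs, [x1, x2, lam], dict=True)[0]
print(sol)                  # {lam: a/2, x1: a/2, x2: a/2}
```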
02

Part b - Summing First-order Conditions

To sum the two first-order conditions involving \(x_1\) and \(x_2\), add equations 1 and 2: \(\frac{\partial f}{\partial x_1} + \frac{\partial f}{\partial x_2}+\lambda \left(\frac{\partial g}{\partial x_1}+\frac{\partial g}{\partial x_2} \right)=0\)
03

Part c - Differentiating the Sum of First-order Conditions

Now differentiate the sum of the first-order conditions with respect to \(a\), remembering that along the optimum \(x_1\), \(x_2\), and \(\lambda\) are all functions of \(a\): \(\frac{d}{da}\left[\frac{\partial f}{\partial x_1} + \frac{\partial f}{\partial x_2}+\lambda \left(\frac{\partial g}{\partial x_1}+\frac{\partial g}{\partial x_2} \right)\right]=0.\) Because the first-order conditions hold for every value of \(a\), this derivative must equal zero; the resulting equation pins down how \(dx_1/da\) and \(dx_2/da\) must adjust so that the first-order conditions continue to hold as \(a\) changes.
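Continuing the same assumed example, a short sketch of part (c): treat the optimal values as functions of \(a\) and differentiate the first-order conditions, which determines the adjustment rates of the \(x\)'s:

```python
# Same assumed example: x1, x2, lam are now functions of a, and we
# differentiate the first-order conditions with respect to a.
import sympy as sp

a = sp.symbols('a', positive=True)
x1, x2, lam = (sp.Function(n)(a) for n in ('x1', 'x2', 'lam'))

focs = [x2 - lam, x1 - lam, a - x1 - x2]   # FOCs from part (a)
d_focs = [sp.diff(eq, a) for eq in focs]   # must remain zero for all a
rates = sp.solve(d_focs,
                 [sp.diff(x1, a), sp.diff(x2, a), sp.diff(lam, a)],
                 dict=True)[0]
print(rates)   # each optimal value rises at rate 1/2 as a increases
```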
04

Part d - Differentiating the Objective Function

Differentiate the objective function \(f\left(x_{1}(a), x_{2}(a), a\right)\) with respect to \(a\): \(\frac{d f\left(x_{1}(a), x_{2}(a), a\right)}{d a} = \frac{\partial f}{\partial a}+\frac{\partial f}{\partial x_1}\frac{dx_1}{da}+\frac{\partial f}{\partial x_2}\frac{dx_2}{da}\)
05

Part e - Differentiating the Constraint

The constraint \( g\left(x_{1}(a), x_{2}(a), a\right)=0 \) holds for every value of \(a\), so its total derivative with respect to \(a\) must equal zero: \(\frac{d g\left(x_{1}(a), x_{2}(a), a\right)}{d a} = \frac{\partial g}{\partial a}+\frac{\partial g}{\partial x_1}\frac{dx_1}{da}+\frac{\partial g}{\partial x_2}\frac{dx_2}{da} = 0\)
06

Part f - Substitution and the Envelope Theorem

Multiply the result from part (e) by \(\lambda\); because that total derivative is zero, \(\lambda\left(\frac{\partial g}{\partial a}+\frac{\partial g}{\partial x_1}\frac{dx_1}{da}+\frac{\partial g}{\partial x_2}\frac{dx_2}{da}\right) = 0.\) The first-order conditions give \(\frac{\partial f}{\partial x_i} = -\lambda \frac{\partial g}{\partial x_i}\) for \(i = 1, 2\). Substituting these into the derivative from part (d) yields \(\frac{d f\left(x_{1}(a), x_{2}(a), a\right)}{d a} = \frac{\partial f}{\partial a} - \lambda\left(\frac{\partial g}{\partial x_1}\frac{dx_1}{da}+\frac{\partial g}{\partial x_2}\frac{dx_2}{da}\right) = \frac{\partial f}{\partial a} + \lambda \frac{\partial g}{\partial a},\) where the final equality uses the zero total derivative of the constraint. This is exactly the partial derivative of the Lagrangian with respect to \(a\), which proves the envelope theorem. Intuitively, every step of the proof uses the fact that the first-order conditions continue to hold as \(a\) changes: because the \(x\)'s are always adjusted to their optimal values, their induced changes \(dx_i/da\) have no first-order effect on the objective, and only the direct effects \(\partial f/\partial a\) and \(\lambda\,\partial g/\partial a\) remain.
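A sketch verifying the envelope result for the same assumed example; since \(\partial f/\partial a = 0\) and \(\partial g/\partial a = 1\) there, the theorem predicts \(df^*/da = \lambda\):

```python
# Same assumed example: verify df*/da = df/da + lam*dg/da at the optimum.
import sympy as sp

x1, x2, lam, a = sp.symbols('x1 x2 lam a', positive=True)
f, g = x1 * x2, a - x1 - x2

opt = {x1: a / 2, x2: a / 2, lam: a / 2}   # optimal values from part (a)
value = f.subs(opt)                         # f*(a) = a**2/4
lhs = sp.diff(value, a)                     # total derivative of the value function
rhs = (sp.diff(f, a) + lam * sp.diff(g, a)).subs(opt)   # envelope prediction
print(lhs, rhs, sp.simplify(lhs - rhs) == 0)            # a/2  a/2  True
```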
07

Part g - Applying the Envelope Theorem in Example 2.8

In Example 2.8, we maximize the area of a rectangular field, \(A = x_1 x_2\), subject to a fixed fence perimeter, \(g(x_1, x_2, P) = P - 2x_1 - 2x_2 = 0\). The envelope theorem gives \(\frac{d A\left(x_{1}(P), x_{2}(P), P\right)}{d P}=\frac{\partial A}{\partial P}+\lambda \frac{\partial g}{\partial P} = 0 + \lambda \cdot 1 = \lambda,\) because the area function does not depend directly on \(P\). The first-order conditions yield \(x_1 = x_2 = P/4\) and \(\lambda = P/8\), so the maximal area is \(A^* = P^2/16\) and \(dA^*/dP = P/8 = \lambda\). The Lagrange multiplier thus puts a value on the constraint: it measures exactly how much additional area one more unit of perimeter would make possible, without re-solving the problem.
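As a check on part (g), a short sketch solving the fence problem from Example 2.8 and confirming that \(dA^*/dP\) equals the multiplier:

```python
# Example 2.8: maximize A = x1*x2 subject to P - 2*x1 - 2*x2 = 0,
# then confirm that dA*/dP equals the Lagrange multiplier.
import sympy as sp

x1, x2, lam, P = sp.symbols('x1 x2 lam P', positive=True)
A = x1 * x2
L = A + lam * (P - 2 * x1 - 2 * x2)

sol = sp.solve([sp.diff(L, v) for v in (x1, x2, lam)], [x1, x2, lam], dict=True)[0]
print(sol)                               # {lam: P/8, x1: P/4, x2: P/4}

A_star = A.subs(sol)                     # maximal area: P**2/16
print(sp.diff(A_star, P) == sol[lam])    # True: an extra unit of fence adds P/8 of area
```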


