Another function we will encounter often in this book is the power function: \\[ y=x^{\delta} \\] where \(0 \leq \delta \leq 1\) (at times we will also use this function with \(\delta<0\), in which case we will use the form \(y=x^{\delta} / \delta\) to ensure that the derivatives have the proper sign). a. Show that this function is concave (and therefore also, by the result of Problem 2.9, quasi-concave). Notice that \(\delta=1\) is a special case and that the function is "strictly" concave only for \(\delta<1\). b. Show that the multivariate form of the power function \\[ y=f\left(x_{1}, x_{2}\right)=\left(x_{1}\right)^{\delta}+\left(x_{2}\right)^{\delta} \\] is also concave (and quasi-concave). Explain why, in this case, the fact that \(f_{12}=f_{21}=0\) makes the determination of concavity especially simple. c. One way to incorporate "scale" effects into the function described in part (b) is to use the monotonic transformation \\[ g\left(x_{1}, x_{2}\right)=y^{\gamma}=\left[\left(x_{1}\right)^{\delta}+\left(x_{2}\right)^{\delta}\right]^{\gamma} \\] where \(\gamma\) is a positive constant. Does this transformation preserve the concavity of the function? Is \(g\) quasi-concave?

Short Answer

Analyzing the concavity and quasi-concavity of these power functions, we find: 1. The univariate power function \(y = x^{\delta}\) is concave for \(0 \le \delta \le 1\) and strictly concave for \(0 < \delta < 1\); since concave functions are also quasi-concave, it is quasi-concave as well. 2. The multivariate power function \(y=f(x_{1}, x_{2})=(x_{1})^{\delta}+(x_{2})^{\delta}\) is concave, and thus quasi-concave; its diagonal Hessian makes this especially easy to verify. 3. The monotonic transformation \(g(x_{1}, x_{2}) = [\left(x_{1}\right)^{\delta}+\left(x_{2}\right)^{\delta}]^{\gamma}\) preserves concavity for \(0 < \gamma \le 1\) (more precisely, \(g\) is concave exactly when \(\gamma\delta \le 1\)) but not for larger \(\gamma\). However, \(g\) remains quasi-concave for every \(\gamma > 0\), because monotonically increasing transformations preserve quasi-concavity.

Step by step solution

01

Univariate Power Function Concavity

To demonstrate the concavity of the univariate power function \(y = x^{\delta}\), we find the second derivative and check that it is nonpositive. Let's compute the derivatives:

1. First derivative: \\[ \frac{dy}{dx} = \delta x^{\delta - 1} \\]
2. Second derivative: \\[ \frac{d^2 y}{dx^2} = \delta(\delta - 1) x^{\delta - 2} \\]

For \(y = x^{\delta}\) to be concave, the second derivative must be nonpositive. Because \(x\) is positive, the sign is determined by the constant factor \(\delta(\delta - 1)\):

- For \(0 < \delta < 1\), we have \(\delta(\delta - 1) < 0\), so the second derivative is negative and the function is strictly concave.
- For \(\delta = 1\), the second derivative is zero: the function is linear, hence concave (the nonpositive second derivative condition still holds) but not strictly concave. This is the special case noted in the problem.

Since the function is concave, it is also quasi-concave by the result of Problem 2.9.
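As a quick sanity check on the sign argument above (an illustrative addition, not part of the textbook solution), the short sketch below uses the sympy library to compute the second derivative symbolically and evaluate it for a few sample values of \(\delta\); the sample point \(x = 2\) is an arbitrary choice.

```python
# Sanity check (illustrative): the second derivative of y = x**delta is
# delta*(delta - 1)*x**(delta - 2), which should be nonpositive for
# 0 <= delta <= 1 and x > 0.
import sympy as sp

x, delta = sp.symbols("x delta", positive=True)
d2y = sp.diff(x**delta, x, 2)
print(sp.simplify(d2y))  # delta*(delta - 1)*x**(delta - 2)

# Evaluate at an arbitrary sample point x = 2 for several delta values.
for d in [sp.Rational(1, 4), sp.Rational(1, 2), sp.Rational(3, 4), 1]:
    val = d2y.subs({delta: d, x: 2})
    print(f"delta = {d}: second derivative = {sp.N(val, 4)} (nonpositive: {bool(val <= 0)})")
```

For every \(\delta\) in the sample, the printed value is negative except at \(\delta = 1\), where it is exactly zero, matching the strict-versus-weak concavity distinction above.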
02

Multivariate Power Function Concavity

To show that the multivariate power function \(y=f(x_{1}, x_{2})=(x_{1})^{\delta}+(x_{2})^{\delta}\) is concave, we find its Hessian matrix and check that it is negative semidefinite. The Hessian matrix is given by: \\[ H = \begin{bmatrix} \frac{\partial^2 y}{\partial x_1^2} & \frac{\partial^2 y}{\partial x_1 \partial x_2} \\ \frac{\partial^2 y}{\partial x_2 \partial x_1} & \frac{\partial^2 y}{\partial x_2^2} \end{bmatrix} \\] Finding the partial derivatives for the matrix elements:

1. \(\frac{\partial^2 y}{\partial x_1^2} = \delta(\delta - 1) x_1^{\delta - 2}\)
2. \(\frac{\partial^2 y}{\partial x_2^2} = \delta(\delta - 1) x_2^{\delta - 2}\)
3. \(\frac{\partial^2 y}{\partial x_1 \partial x_2} = \frac{\partial^2 y}{\partial x_2 \partial x_1} = 0\), because the function is additively separable.

Our Hessian matrix is therefore: \\[ H = \begin{bmatrix}\delta(\delta - 1) x_1^{\delta - 2} & 0 \\ 0 & \delta(\delta - 1) x_2^{\delta - 2}\end{bmatrix} \\] Since \(0 \le \delta \le 1\) and \(x_1, x_2 > 0\), both diagonal entries are nonpositive, so \(H\) is negative semidefinite. Therefore the multivariate power function \(y=f(x_{1}, x_{2})=(x_{1})^{\delta}+(x_{2})^{\delta}\) is concave and, by Problem 2.9, also quasi-concave. The fact that \(f_{12} = f_{21} = 0\) makes the determination of concavity especially simple: the Hessian is diagonal, so negative semidefiniteness reduces to checking the signs of the two own second partials. The cross term drops out of the determinant condition \(f_{11}f_{22} - f_{12}^2 \ge 0\), which then holds automatically because the product of two nonpositive numbers is nonnegative.
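To illustrate why the vanishing cross-partials make the check easy, here is a small sympy sketch (an illustrative addition; the value \(\delta = 1/2\) and the point \((3, 5)\) are arbitrary choices) that builds the Hessian and confirms it is diagonal with nonpositive eigenvalues.

```python
# Illustrative check: the Hessian of f = x1**delta + x2**delta is diagonal,
# so negative semidefiniteness reduces to the signs of the diagonal entries.
import sympy as sp

x1, x2, delta = sp.symbols("x1 x2 delta", positive=True)
f = x1**delta + x2**delta
H = sp.hessian(f, (x1, x2))
print(H)  # off-diagonal entries are 0: f is additively separable

# Evaluate at an arbitrary point for a sample delta in (0, 1).
Hn = H.subs({delta: sp.Rational(1, 2), x1: 3, x2: 5})
print([sp.N(ev, 6) for ev in Hn.eigenvals()])  # both eigenvalues are negative
```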
03

Monotonic Transformation and Concavity

To analyze the effect of the monotonic transformation on concavity, write \(g(x_{1}, x_{2}) = [\left(x_{1}\right)^{\delta}+\left(x_{2}\right)^{\delta}]^{\gamma} = f^{\gamma}\) and apply the chain rule. Note that, unlike in part (b), the cross-partials no longer vanish:

1. \(\frac{\partial^2 g}{\partial x_1^2} = \gamma(\gamma - 1)f^{\gamma - 2}\left(\delta x_1^{\delta - 1}\right)^2 + \gamma f^{\gamma - 1}\delta(\delta - 1)x_1^{\delta - 2}\)
2. \(\frac{\partial^2 g}{\partial x_2^2} = \gamma(\gamma - 1)f^{\gamma - 2}\left(\delta x_2^{\delta - 1}\right)^2 + \gamma f^{\gamma - 1}\delta(\delta - 1)x_2^{\delta - 2}\)
3. \(\frac{\partial^2 g}{\partial x_1 \partial x_2} = \frac{\partial^2 g}{\partial x_2 \partial x_1} = \gamma(\gamma - 1)f^{\gamma - 2}\delta^2 x_1^{\delta - 1}x_2^{\delta - 1}\)

Rather than grinding through the determinant condition, use the composition rule: \(g = h(f)\) with \(h(t) = t^{\gamma}\). For \(0 < \gamma \le 1\), \(h\) is increasing and concave, and a concave increasing transformation of a concave function is concave, so the transformation preserves concavity. For \(\gamma\) large enough, concavity fails: with \(\delta = 1\) and \(\gamma = 2\), for example, \(g = (x_1 + x_2)^2\), which is convex. (In fact, because \(f^{1/\delta}\) is concave and homogeneous of degree 1, \(g = (f^{1/\delta})^{\gamma\delta}\) is concave exactly when \(\gamma\delta \le 1\).) Quasi-concavity, by contrast, survives for every \(\gamma > 0\): because \(h\) is monotonically increasing, \(g\) has exactly the same contour lines as \(f\), merely relabeled, and a monotonically increasing transformation of a quasi-concave function is quasi-concave.
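The composition argument can also be checked numerically. The sketch below is an illustrative addition: the value \(\delta = 1/2\), the sample point \((3, 5)\), and the grid of \(\gamma\) values are arbitrary choices. With \(\delta = 1/2\), the Hessian of \(g\) should be negative semidefinite (NSD) exactly for \(\gamma\delta \le 1\), i.e. \(\gamma \le 2\).

```python
# Illustrative check of when g = (x1**delta + x2**delta)**gamma stays concave.
import sympy as sp

x1, x2 = sp.symbols("x1 x2", positive=True)
delta = sp.Rational(1, 2)

def hessian_eigenvalues(gamma, point=(3, 5)):
    """Numeric eigenvalues of the Hessian of g at an arbitrary sample point."""
    g = (x1**delta + x2**delta) ** gamma
    H = sp.hessian(g, (x1, x2)).subs({x1: point[0], x2: point[1]})
    return [complex(ev).real for ev in H.eigenvals()]

# With delta = 1/2, g should be concave (NSD Hessian) iff gamma*delta <= 1.
for gamma in [sp.Rational(1, 2), 1, 2, 3]:
    eigs = hessian_eigenvalues(gamma)
    print(f"gamma = {gamma}: eigenvalues = {[round(e, 6) for e in eigs]}, "
          f"NSD: {all(e <= 1e-9 for e in eigs)}")
```

A positive eigenvalue appears only at \(\gamma = 3\), where \(\gamma\delta > 1\); yet even there the contour lines of \(g\) coincide with those of \(f\), so quasi-concavity is unaffected.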


Most popular questions from this chapter

Here are a few useful relationships related to the covariance of two random variables, \(x_{1}\) and \(x_{2}\). a. Show that \(\operatorname{Cov}\left(x_{1}, x_{2}\right)=E\left(x_{1} x_{2}\right)-E\left(x_{1}\right) E\left(x_{2}\right)\). An important implication of this is that if \(\operatorname{Cov}\left(x_{1}, x_{2}\right)=0\), then \(E\left(x_{1} x_{2}\right)=E\left(x_{1}\right) E\left(x_{2}\right)\). That is, the expected value of a product of two random variables is the product of these variables' expected values. b. Show that \(\operatorname{Var}\left(a x_{1}+b x_{2}\right)=a^{2} \operatorname{Var}\left(x_{1}\right)+b^{2} \operatorname{Var}\left(x_{2}\right)+2 a b \operatorname{Cov}\left(x_{1}, x_{2}\right)\). c. In Problem 2.15d we looked at the variance of \(X=k x_{1}+(1-k) x_{2}\), \(0 \leq k \leq 1\). Is the conclusion that this variance is minimized for \(k=0.5\) changed by considering cases where \(\operatorname{Cov}\left(x_{1}, x_{2}\right) \neq 0\)? d. The correlation coefficient between two random variables is defined as \\[ \operatorname{Corr}\left(x_{1}, x_{2}\right)=\frac{\operatorname{Cov}\left(x_{1}, x_{2}\right)}{\sqrt{\operatorname{Var}\left(x_{1}\right) \operatorname{Var}\left(x_{2}\right)}} \\] Explain why \(-1 \leq \operatorname{Corr}\left(x_{1}, x_{2}\right) \leq 1\) and provide some intuition for this result. e. Suppose that the random variable \(y\) is related to the random variable \(x\) by the linear equation \(y=\alpha+\beta x\). Show that \\[ \beta=\frac{\operatorname{Cov}(y, x)}{\operatorname{Var}(x)} \\] Here \(\beta\) is sometimes called the (theoretical) regression coefficient of \(y\) on \(x\). With actual data, the sample analog of this expression is the ordinary least squares (OLS) regression coefficient.

One of the most important functions we will encounter in this book is the Cobb-Douglas function: \\[ y=\left(x_{1}\right)^{\alpha}\left(x_{2}\right)^{\beta} \\] where \(\alpha\) and \(\beta\) are positive constants that are each less than 1. a. Show that this function is quasi-concave using a "brute force" method by applying Equation 2.114. b. Show that the Cobb-Douglas function is quasi-concave by showing that any contour line of the form \(y=c\) (where \(c\) is any positive constant) is convex and therefore that the set of points for which \(y>c\) is a convex set. c. Show that if \(\alpha+\beta>1\), then the Cobb-Douglas function is not concave (thereby illustrating again that not all quasi-concave functions are concave). Note: The Cobb-Douglas function is discussed further in the Extensions to this chapter.

Suppose a firm's total revenues depend on the amount produced (\(q\)) according to the function \\[ R=70 q-q^{2} \\] Total costs also depend on \(q\): \\[ C=q^{2}+30 q+100 \\] a. What level of output should the firm produce to maximize profits \((R-C)\)? What will profits be? b. Show that the second-order conditions for a maximum are satisfied at the output level found in part (a). c. Does the solution calculated here obey the "marginal revenue equals marginal cost" rule? Explain.

The definition of the variance of a random variable can be used to show a number of additional results. a. Show that \(\operatorname{Var}(x)=E\left(x^{2}\right)-[E(x)]^{2}\). b. Use Markov's inequality (Problem 2.14d) to show that if \(x\) can take on only non-negative values, \\[ P\left[\left|x-\mu_{x}\right| \geq k\right] \leq \frac{\sigma_{x}^{2}}{k^{2}} \\] This result shows that there are limits on how often a random variable can be far from its expected value. If \(k=h \sigma\), this result also says that \\[ P\left[\left|x-\mu_{x}\right| \geq h \sigma\right] \leq \frac{1}{h^{2}} \\] Therefore, for example, the probability that a random variable can be more than two standard deviations from its expected value is always less than 0.25. This theoretical result is called Chebyshev's inequality. c. Equation 2.197 showed that if two (or more) random variables are independent, the variance of their sum is equal to the sum of their variances. Use this result to show that the sum of \(n\) independent random variables, each of which has expected value \(\mu\) and variance \(\sigma^{2}\), has expected value \(n \mu\) and variance \(n \sigma^{2}\). Show also that the average of these \(n\) random variables (which is also a random variable) will have expected value \(\mu\) and variance \(\sigma^{2} / n\). This is sometimes called the law of large numbers; that is, the variance of an average shrinks down as more independent variables are included. d. Use the result from part (c) to show that if \(x_{1}\) and \(x_{2}\) are independent random variables each with the same expected value and variance, the variance of a weighted average of the two, \(X=k x_{1}+(1-k) x_{2}\), \(0 \leq k \leq 1\), is minimized when \(k=0.5\). How much is the variance of this sum reduced by setting \(k\) properly relative to other possible values of \(k\)? e. How would the result from part (d) change if the two variables had unequal variances?

a. Suppose that a firm has a marginal cost function given by \(M C(q)=q+1\). What is this firm's total cost function? Explain why total costs are known only up to a constant of integration, which represents fixed costs. b. As you may know from an earlier economics course, if a firm takes price (\(p\)) as given in its decisions, then it will produce that output for which \(p=M C(q)\). If the firm follows this profit-maximizing rule, how much will it produce when \(p=15\)? Assuming that the firm is just breaking even at this price, what are fixed costs? c. How much will profits for this firm increase if price increases to \(20\)? d. Show that, if we continue to assume profit maximization, then this firm's profits can be expressed solely as a function of the price it receives for its output. e. Show that the increase in profits from \(p=15\) to \(p=20\) can be calculated in two ways: (i) directly from the equation derived in part (d); and (ii) by integrating the inverse marginal cost function \(\left[M C^{-1}(p)=p-1\right]\) from \(p=15\) to \(p=20\). Explain this result intuitively using the envelope theorem.
