Egwald Economics: Microeconomics
Cost Functions
by
Elmer G. Wiens
Cost Functions: Cobb-Douglas Cost | Normalized Quadratic Cost | Translog Cost | Diewert Cost | Generalized CES-Translog Cost | Generalized CES-Diewert Cost | References and Links
J. Normalized Quadratic Cost Function
Linear Least Squares | Nonlinear Least Squares
Suppose we have a data set relating output quantities, q, to (cost minimizing) factor inputs, L, K, M, and input prices, wL, wK, and wM, and consequently data on the total cost of producing specific levels of outputs.
The three factor Normalized Quadratic (Total) Cost Function is:
C(q;wL,wK,wM) = h(q) * c(wL,wK,wM) (**)
where the returns to scale function is:
h(q) = q^(1/nu1)
a continuous, increasing function of q (q >= 1), with h(0) = 0 and h(1) = 1,
and the unit cost function is:
c(wL,wK,wM) = cL * wL + cK * wK + cM * wM
   + (1/2) * [dLL * (wL*wL) + dLK * (wL*wK) + dLM * (wL*wM)
            + dKL * (wK*wL) + dKK * (wK*wK) + dKM * (wK*wM)
            + dML * (wM*wL) + dMK * (wM*wK) + dMM * (wM*wM)] / (wL + wK + wM)
linear in its twelve parameters, cL, cK, cM, dLL, dLK, dLM, dKL, dKK, dKM, dML, dMK, and dMM. Dividing the portion of the unit cost function that is quadratic in the variables wL, wK, and wM by (wL + wK + wM) ensures that the unit cost function is homogeneous of degree one in prices, since c(t*wL, t*wK, t*wM) = t * c(wL,wK,wM) for any t > 0 (Diewert and Wales, (1988), 327-342). Multiplying the unit cost function by the returns to scale function to obtain the total cost function presumes that the production technology is homothetic.
It is convenient to express the normalized unit cost function in terms of vectors and matrices.
Define the vectors:
c = [cL, cK, cM]^T, w = [wL, wK, wM]^T, 1 = [1, 1, 1]^T, and 0 = [0, 0, 0]^T.
Define the matrix D by:
D = | dLL  dLK  dLM |
    | dKL  dKK  dKM |
    | dML  dMK  dMM |
The unit cost function becomes:
c(w) = c^T * w + (1/2) * w^T * D * w / (1^T * w),
a linear function in its parameters, c, and D.
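The following short Python sketch (not code from this site; the parameter values are illustrative placeholders, not estimates) evaluates the unit cost function in matrix form, checks that it is homogeneous of degree one in prices, and confirms that the matrix form agrees with the element-by-element formula above.

```python
import numpy as np

def unit_cost(w, c, D):
    """Normalized Quadratic unit cost: c(w) = c'w + (1/2) w'Dw / (1'w)."""
    return c @ w + 0.5 * (w @ D @ w) / w.sum()

# Illustrative placeholder parameters (symmetric D), not estimates.
c = np.array([0.4, 0.5, 0.3])          # [cL, cK, cM]
D = np.array([[-0.10,  0.06,  0.04],
              [ 0.06, -0.05, -0.01],
              [ 0.04, -0.01, -0.03]])

w = np.array([7.0, 13.0, 6.0])         # [wL, wK, wM]

# Degree-one homogeneity: c(t*w) = t*c(w) for any t > 0.
for t in (0.5, 2.0, 10.0):
    assert np.isclose(unit_cost(t * w, c, D), t * unit_cost(w, c, D))

# The matrix form agrees with the element-by-element formula above.
wL, wK, wM = w
cL, cK, cM = c
quad = (D[0, 0]*wL*wL + D[0, 1]*wL*wK + D[0, 2]*wL*wM
        + D[1, 0]*wK*wL + D[1, 1]*wK*wK + D[1, 2]*wK*wM
        + D[2, 0]*wM*wL + D[2, 1]*wM*wK + D[2, 2]*wM*wM)
scalar_form = cL*wL + cK*wK + cM*wM + 0.5 * quad / (wL + wK + wM)
assert np.isclose(unit_cost(w, c, D), scalar_form)
print(unit_cost(w, c, D))
```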
Restrictions:
D = D^T, and D * w* = 0,
where the reference vector w* = [wL*, wK*, wM*]^T contains the base prices, wL*, wK*, and wM*. The symmetry restriction D = D^T ensures that dLK = dKL, dLM = dML, and dKM = dMK (Young's Theorem).
Consequently, as specified the Normalized Quadratic cost function apparently has nine free parameters; the three additional restrictions in D * w* = 0 reduce these to six free parameters (see below).
Using conventional matrix calculus (Intriligator, (1971), 497-500), the first and second order partial derivatives (Diewert and Fox, (2009), 158-164) of the unit cost function, c(w), are:
∇c(w) = c + D * w / (1^T * w) - (1/2) * w^T * D * w * 1 / (1^T * w)^2, and
∇²c(w) = D / (1^T * w) - D * w * 1^T / (1^T * w)^2 - 1 * w^T * D / (1^T * w)^2 + w^T * D * w * (1 * 1^T) / (1^T * w)^3.
At the reference vector w*:
∇c(w*) = c,
∇²c(w*) = D / (1^T * w*).
Shephard's lemma provides that the gradient vector of the unit cost function is the vector of unit input demand functions:
∇c(w) = [l(w), k(w), m(w)]^T.
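The derivative formulas are easy to check numerically. The sketch below (with illustrative placeholder parameters; D is constructed to be symmetric with D * w* = 0, as the restrictions above require) compares the analytic gradient with a finite-difference gradient, and confirms that at the base prices w* the unit input demands reduce to the vector c.

```python
import numpy as np

# Numerical check (placeholder parameters, not estimates) of the gradient
# formula and of Shephard's lemma at the base price vector w*.
def unit_cost(w, c, D):
    return c @ w + 0.5 * (w @ D @ w) / w.sum()

def grad_unit_cost(w, c, D):
    """Analytic gradient: c + D*w/(1'w) - (1/2)*w'Dw*1/(1'w)^2."""
    s = w.sum()
    return c + D @ w / s - 0.5 * (w @ D @ w) / s**2 * np.ones(3)

wstar = np.array([7.0, 13.0, 6.0])       # illustrative base prices wL*, wK*, wM*
c = np.array([0.4, 0.5, 0.3])
# Build a symmetric D with D @ wstar = 0 by projecting a symmetric matrix
# onto the orthogonal complement of wstar.
P = np.eye(3) - np.outer(wstar, wstar) / (wstar @ wstar)
D = P @ np.array([[0.9, 0.2, 0.1],
                  [0.2, 0.7, 0.3],
                  [0.1, 0.3, 0.8]]) @ P

w = np.array([9.0, 11.0, 5.0])
eps = 1e-6
fd = np.array([(unit_cost(w + eps * np.eye(3)[i], c, D)
                - unit_cost(w - eps * np.eye(3)[i], c, D)) / (2 * eps)
               for i in range(3)])
assert np.allclose(fd, grad_unit_cost(w, c, D), atol=1e-6)

# At w*, D @ w* = 0, so the unit input demands [l, k, m] collapse to c.
assert np.allclose(grad_unit_cost(wstar, c, D), c)
print(grad_unit_cost(w, c, D))            # [l(w), k(w), m(w)] via Shephard's lemma
```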
Re-write the factor prices in normalized form:
v = [vL, vK, vM] = [wL, wK, wM] / (wL + wK + wM).
The factor demands as functions of q, vL, vK, and vM are:
L(q; vL, vK, vM) / q^(1/nu1) = l(vL, vK, vM)
   = cL + dLL * vL + dLK * vK + dLM * vM
     - (1/2) * [dLL * vL*vL + dLK * vL*vK + dLM * vL*vM
              + dKL * vK*vL + dKK * vK*vK + dKM * vK*vM
              + dML * vM*vL + dMK * vM*vK + dMM * vM*vM]

K(q; vL, vK, vM) / q^(1/nu1) = k(vL, vK, vM)
   = cK + dKL * vL + dKK * vK + dKM * vM
     - (1/2) * [dLL * vL*vL + dLK * vL*vK + dLM * vL*vM
              + dKL * vK*vL + dKK * vK*vK + dKM * vK*vM
              + dML * vM*vL + dMK * vM*vK + dMM * vM*vM]

M(q; vL, vK, vM) / q^(1/nu1) = m(vL, vK, vM)
   = cM + dML * vL + dMK * vK + dMM * vM
     - (1/2) * [dLL * vL*vL + dLK * vL*vK + dLM * vL*vM
              + dKL * vK*vL + dKK * vK*vK + dKM * vK*vM
              + dML * vM*vL + dMK * vM*vK + dMM * vM*vM]
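A further sketch (same placeholder parameters as above) scales the unit demands by the returns to scale term q^(1/nu1) and verifies that factor payments wL*L + wK*K + wM*M exhaust total cost C(q;w) = h(q) * c(w), as Euler's theorem implies for a linearly homogeneous unit cost function.

```python
import numpy as np

# Factor demands as unit demands scaled by q^(1/nu1); placeholder parameters,
# with D symmetric and D @ wstar = 0 by construction.
nu1 = 0.9
c = np.array([0.4, 0.5, 0.3])
wstar = np.array([7.0, 13.0, 6.0])
P = np.eye(3) - np.outer(wstar, wstar) / (wstar @ wstar)
D = P @ np.array([[0.9, 0.2, 0.1],
                  [0.2, 0.7, 0.3],
                  [0.1, 0.3, 0.8]]) @ P

def unit_demands(w):
    v = w / w.sum()                                  # normalized prices vL, vK, vM
    return c + D @ v - 0.5 * (v @ D @ v) * np.ones(3)

q = 4.0
w = np.array([9.0, 11.0, 5.0])
L, K, M = q**(1.0 / nu1) * unit_demands(w)           # factor demands L, K, M

# Factor payments equal total cost C(q; w) = q^(1/nu1) * c(w).
total_cost = q**(1.0 / nu1) * (c @ w + 0.5 * (w @ D @ w) / w.sum())
assert np.isclose(L * w[0] + K * w[1] + M * w[2], total_cost)
print(L, K, M, total_cost)
```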
Estimate the factor demand equations using linear least squares.
I. Linear least squares with restrictions on the parameter values.
For the purpose of estimating the factor demand equations, re-write these functions:
L(q; vL, vK, vM) / q^(1/nu1) = l(vL, vK, vM)
   = cL + dLL * vL + dLK * vK + dLM * vM
     + [tdLL * vL*vL + tdLK * vL*vK + tdLM * vL*vM
      + tdKL * vK*vL + tdKK * vK*vK + tdKM * vK*vM
      + tdML * vM*vL + tdMK * vM*vK + tdMM * vM*vM]

K(q; vL, vK, vM) / q^(1/nu1) = k(vL, vK, vM)
   = cK + dKL * vL + dKK * vK + dKM * vM
     + [tdLL * vL*vL + tdLK * vL*vK + tdLM * vL*vM
      + tdKL * vK*vL + tdKK * vK*vK + tdKM * vK*vM
      + tdML * vM*vL + tdMK * vM*vK + tdMM * vM*vM]

M(q; vL, vK, vM) / q^(1/nu1) = m(vL, vK, vM)
   = cM + dML * vL + dMK * vK + dMM * vM
     + [tdLL * vL*vL + tdLK * vL*vK + tdLM * vL*vM
      + tdKL * vK*vL + tdKK * vK*vK + tdKM * vK*vM
      + tdML * vM*vL + tdMK * vM*vK + tdMM * vM*vM]
We can express the factor demands as functions of ten explanatory variables with their corresponding parameters:
Equation | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Parameter vector
Variable | constant | vL | vK | vM | vL*vL | vL*vK | vL*vM | vK*vK | vK*vM | vM*vM |
L | cL | dLL | dLK | dLM | tdLL | tdLK | tdLM | tdKK | tdKM | tdMM | βL
K | cK | dLK | dKK | dKM | tdLL | tdLK | tdLM | tdKK | tdKM | tdMM | βK
M | cM | dLM | dKM | dMM | tdLL | tdLK | tdLM | tdKK | tdKM | tdMM | βM
The construction of the variables along with their restrictions described below ensures the symmetry of the estimated matrix D, i.e. D = DT.
Suppose we have m observations on q, L, K, M, wL, wK, and wM. Let V be the m x 10 matrix of observations of the explanatory variables constructed from v = [vL, vK, vM]^T. Let L, K, and M be the m-component vectors of observed factor demands. Then we can estimate the factor demand equations as a group (Johnston, (1972), 238-241) along with the necessary constraints, D = D^T, D * w* = 0, and the accounting constraints among the "d__" and "td__" parameters within and between the demand equations.
This group of equations, W * β = F, can be represented as:

| V  0  0 |     | βL |     | L |
| 0  V  0 |  *  | βK |  =  | K |
| 0  0  V |     | βM |     | M |
Subject to the constraints on the thirty parameters, our objective is to find the vector of parameters, β, that minimizes the distance between F and W * β, i.e. the norm || W * β - F ||.
Looking at the construction of the ten variables of the matrix V, one finds that the rank of V is six, i.e. V has six independent column vectors, because the identity vL + vK + vM = 1 induces four linear dependencies among the ten columns. Therefore, when one estimates the factor demand equations as a group, the 3*m by 30 matrix of observations, W, has 18 independent column vectors out of thirty.
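This rank claim is easy to confirm numerically. The sketch below (the sample size and random draws are arbitrary) builds V from randomly drawn prices and checks that rank(V) = 6 and rank(W) = 18.

```python
import numpy as np
from scipy.linalg import block_diag

# Build the ten explanatory variables from random normalized prices and check
# the rank claims: rank(V) = 6 because vL + vK + vM = 1 induces four linear
# dependencies among the ten columns, hence rank(W) = 3 * 6 = 18.
rng = np.random.default_rng(0)
m = 40                                         # arbitrary number of observations
w = rng.uniform(4.0, 16.0, size=(m, 3))        # raw prices wL, wK, wM
v = w / w.sum(axis=1, keepdims=True)           # normalized prices vL, vK, vM

vL, vK, vM = v[:, 0], v[:, 1], v[:, 2]
V = np.column_stack([np.ones(m), vL, vK, vM,
                     vL*vL, vL*vK, vL*vM, vK*vK, vK*vM, vM*vM])
W = block_diag(V, V, V)                        # 3m x 30 observation matrix

print(np.linalg.matrix_rank(V), np.linalg.matrix_rank(W))   # 6 18
```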
Moreover, the matrix of restrictions, R, on the parameters has 24 rows, i.e. R is a 24 x 30 matrix, with a rank of 24. These restrictions can be expressed as:
R * β = r,
where r is a 24 component vector. The restrictions matrix, R, and restrictions vector, r:
row | cL | dLL | dLK | dLM | tdLL | tdLK | tdLM | tdKK | tdKM | tdMM | cK | dLK | dKK | dKM | tdLL | tdLK | tdLM | tdKK | tdKM | tdMM | cM | dLM | dKM | dMM | tdLL | tdLK | tdLM | tdKK | tdKM | tdMM | | r |
1 | 0 | 1 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 0 |
2 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 0 |
3 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 0 |
4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 0 |
5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 0 |
6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 0 |
7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | | 0 |
8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | | 0 |
9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 | | 0 |
10 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 0 |
11 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -1 | 0 | 0 | 0 | 0 | 0 | | 0 |
12 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 0 |
13 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -1 | 0 | 0 | 0 | 0 | | 0 |
14 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 0 |
15 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -1 | 0 | 0 | 0 | | 0 |
16 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 0 |
17 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -1 | 0 | 0 | | 0 |
18 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 0 |
19 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -1 | 0 | | 0 |
20 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 0 |
21 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -1 | | 0 |
22 | 0 | 0.269 | 0.5 | 0.231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 0 |
23 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.269 | 0.5 | 0.231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 0 |
24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.269 | 0.5 | 0.231 | 0 | 0 | 0 | 0 | 0 | 0 | | 0 |
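The restriction matrix can also be assembled programmatically. The sketch below (my own indexing, not code from this site) reproduces the 24 x 30 matrix R and vector r for given normalized base prices v* = w* / (wL* + wK* + wM*); the values 0.269, 0.5, 0.231 in rows 22 to 24 appear to be the normalized base prices used here.

```python
import numpy as np

# Sketch of how the 24 x 30 restriction matrix R and vector r tabulated above
# can be assembled.  Parameter ordering, per 10-column block for the L, K,
# and M equations:  [c_, d_1, d_2, d_3, tdLL, tdLK, tdLM, tdKK, tdKM, tdMM]
def restriction_matrix(vstar):
    """vstar: normalized base prices w*/(wL* + wK* + wM*), e.g. [0.269, 0.5, 0.231]."""
    R = np.zeros((24, 30))
    r = np.zeros(24)
    row = 0

    # Rows 1-9: tie each equation's own d parameters to the td parameters.
    # Diagonal d's (dLL, dKK, dMM) carry a coefficient 2 on their td twin.
    d_td_links = [
        (0, 1, 4, 2.0), (0, 2, 5, 1.0), (0, 3, 6, 1.0),   # L equation
        (1, 1, 5, 1.0), (1, 2, 7, 2.0), (1, 3, 8, 1.0),   # K equation
        (2, 1, 6, 1.0), (2, 2, 8, 1.0), (2, 3, 9, 2.0),   # M equation
    ]
    for block, d_off, td_off, coef in d_td_links:
        R[row, block*10 + d_off] = 1.0
        R[row, block*10 + td_off] = coef
        row += 1

    # Rows 10-21: the six td parameters are identical across the three equations.
    for td_off in range(4, 10):
        for other_block in (1, 2):
            R[row, td_off] = 1.0                       # td in the L block
            R[row, other_block*10 + td_off] = -1.0     # same td in the K or M block
            row += 1

    # Rows 22-24: D * w* = 0, written with the normalized base prices vstar.
    for block in range(3):
        R[row, block*10 + 1: block*10 + 4] = vstar
        row += 1

    return R, r

R, r = restriction_matrix(np.array([0.269, 0.5, 0.231]))
print(R.shape, np.linalg.matrix_rank(R))   # (24, 30) 24, i.e. R has full row rank
```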
II. Objective: Solve the LSE problem:
Among all 30 component β vectors obeying:
R * β = r,
find the β vector that minimizes the norm:
|| W * β - F ||.
By construction the observation matrix W is rank deficient, making it infeasible to use the usual QR algorithm described on the web page, Linear and Restricted Multiple Regression, to solve this problem: find the 30 component β vector that minimizes the norm || W * β - F || subject to the requirement R * β = r.
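One standard way to handle such a rank-deficient, equality-constrained least squares problem (not necessarily the method used on this site) is the null-space method, sketched below: write every β satisfying R * β = r as β = β0 + Z * y, with the columns of Z spanning the null space of R, and solve the remaining unconstrained least squares problem in y.

```python
import numpy as np
from scipy.linalg import null_space

# Null-space method for the equality-constrained least squares (LSE) problem.
# Every beta with R*beta = r can be written beta = beta0 + Z*y, where beta0 is
# a particular solution and the columns of Z span null(R); the problem then
# reduces to an ordinary least squares problem in y, which np.linalg.lstsq
# solves even though W (and hence W @ Z) may be rank deficient.
def solve_lse(W, F, R, r):
    beta0 = np.linalg.lstsq(R, r, rcond=None)[0]   # exact solution: R has full row rank
    Z = null_space(R)                              # here 30 x 6, since rank(R) = 24
    y = np.linalg.lstsq(W @ Z, F - W @ beta0, rcond=None)[0]
    return beta0 + Z @ y

# Usage sketch, assuming W (3m x 30), F (3m,), R (24 x 30) and r (24,) have been
# built as described above:
#   beta = solve_lse(W, F, R, r)
#   assert np.allclose(R @ beta, r)
```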
III.
To determine the Normalized Quadratic cost function's efficacy in estimating the cost structure of a production technology, we shall use it to approximate cost data generated by a CES Production function. The estimated parameters of the Normalized Quadratic cost function will vary with the parameters sigma, nu, alpha, beta and gamma of the CES production function.
CES Production Function:
q = A * [alpha * (L^-rho) + beta * (K^-rho) + gamma *(M^-rho)]^(-nu/rho) = f(L,K,M).
where L = labour, K = capital, M = materials and supplies, and q = product. The parameter nu is a measure of the economies of scale, while the parameter rho yields the elasticity of substitution:
sigma = 1/(1 + rho).
Set the parameters below to re-run with your own CES parameters.
Restrictions: .7 < nu < 1.3; .5 < sigma < 1.5; .25 < alpha < .45, .3 < beta < .5, .2 < gamma < .35
sigma = 1 → nu = alpha + beta + gamma (Cobb-Douglas)
sigma < 1 → inputs complements; sigma > 1 → inputs substitutes
.5 < nu1 < 2;
4 <= wL* <= 11, 7 <= wK* <= 16, 4 <= wM* <= 10
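As a rough illustration of how such cost data can be generated (the data-generation code used on this site is not shown here, and the parameter values below are placeholders chosen within the ranges above), one can solve the cost-minimization problem for the CES technology numerically:

```python
import numpy as np
from scipy.optimize import minimize

# Generate cost-minimizing (L, K, M) data from the CES technology by
# numerical optimization; parameter values are illustrative placeholders.
A, nu = 1.0, 1.0
alpha, beta, gamma = 0.35, 0.40, 0.25
sigma = 0.85
rho = 1.0 / sigma - 1.0                   # sigma = 1/(1 + rho)

def f(x):
    """CES production function q = f(L, K, M)."""
    L, K, M = x
    return A * (alpha * L**-rho + beta * K**-rho + gamma * M**-rho)**(-nu / rho)

def cost_minimizing_inputs(q, w):
    """Minimize w . x subject to f(x) >= q."""
    res = minimize(lambda x: w @ x, x0=np.full(3, q),
                   constraints=[{"type": "ineq", "fun": lambda x: f(x) - q}],
                   bounds=[(1e-6, None)] * 3, method="SLSQP")
    return res.x

w = np.array([7.0, 13.0, 6.0])            # wL, wK, wM
for q in (10.0, 20.0, 30.0):
    x = cost_minimizing_inputs(q, w)
    print(q, x.round(3), round(w @ x, 3))  # output, (L, K, M), total cost
```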