# Recursive utility in a Markov environment with stochastic growth


Edited by David M. Kreps, Stanford University, Stanford, CA, and approved June 7, 2012 (received for review January 6, 2012)

## Abstract

Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and the solution to an arguably simpler Perron–Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large-deviation bounds for tail events in stochastic consumption growth and preferences induced by recursive utility.

Recursive utility models of the type suggested by ref. 1 and featured in the asset-pricing literature by ref. 2 and others represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. Such preferences are used in economic dynamics because seemingly simple parametric versions provide a convenient device to change risk aversion while maintaining the same elasticity of intertemporal substitution. In this paper we explore infinite-horizon specifications in the context of a Markov environment. Even under the Markov specification, establishing the existence of a solution to this forward-looking recursion used to depict preferences can be challenging. (See ref. 3 for a recent thorough analysis of existence and uniqueness of continuation value processes, but the sufficient conditions given there impose restrictions that preclude some of the parametric models used in practice.) In this paper we establish a connection between the solution to this equation and the solution to an arguably simpler eigenvalue equation of the type that occurs in the study of large deviations for Markov processes (4⇓–6).

The remainder of the paper is organized as follows. First, we state formally the recursive utility problem and a related Perron–Frobenius eigenvalue problem. We use the latter problem to construct a change in probability that plays a central role in our analysis. Under this change of measure, we establish several inequalities leading up to our main analytical result. We conclude the paper by expanding on some of the ramifications of our analysis and linking our results to the study of large deviations applied to a Markov process.

## Two Related Problems

Consider a discrete-time specification of recursive preferences of the type suggested by refs. 1 and 2. We use the homogeneous-of-degree-one aggregator

V_t = [ζ(1 − exp(−δ))(C_t)^{1−ρ} + exp(−δ)(R_t)^{1−ρ}]^{1/(1−ρ)}, [**1**]

specified in terms of current-period consumption *C_t* and the continuation value *V_{t+1}*, where

R_t = (E[(V_{t+1})^{1−γ} | F_t])^{1/(1−γ)}

adjusts the continuation value for risk. With these preferences, 1/ρ is the elasticity of intertemporal substitution and δ is a subjective discount rate. The parameter ζ does not alter preferences, but gives some additional flexibility, and we select it in a judicious manner.

Next exploit the homogeneity-of-degree-one specification of the aggregator Eq. **1** to obtain

V_t/C_t = (ζ(1 − exp(−δ)) + exp(−δ){R_t[(V_{t+1}/C_{t+1})(C_{t+1}/C_t)]}^{1−ρ})^{1/(1−ρ)}. [**2**]

Applying the aggregator requires a terminal condition for the continuation value. In what follows we consider infinite-horizon limits. Thus, we explore the construction of the continuation value–consumption ratio V_t/C_t as a function of the underlying Markov state.

Consider now a Markov specification in discrete time. Let {(X_t, Y_t) : t = 0, 1, 2, …} be an underlying Markov process, and suppose the following:

**Assumption 1.**

*a*) *The joint distribution of* (X_{t+1}, Y_{t+1}) *conditioned on* (X_t, Y_t) *depends only on* X_t. *b*) *Consumption dynamics evolve as*

log C_{t+1} − log C_t = κ(X_t, Y_{t+1}).

In light of this restriction, we may view *X* alone as a Markov process and *Y* does not “cause” *X* in the sense of ref. 7. As suggested by a referee, the process *Y* can be viewed as an independent sequence conditioned on the entire process *X*, where the conditional distribution of Y_{t+1} depends only on X_t and X_{t+1}. [The referee noted that an argument given on p. 1616 of ref. 8 may be extended to demonstrate this conditional independence and that we may view {Y_t} as a “hidden-state Markov chain” with hidden state X_t. In our analysis *X* is treated as directly observable, and we defer the study of hidden states in this setting to future research.]

When the joint process (X, Y) is stationary, the logarithm of consumption has stationary increments and the level process for consumption displays stochastic geometric growth. For convenience we normalize C_0 = 1. Given our assumed homogeneity in preferences, it is straightforward to allow for more general initial conditions. (In the special case in which *κ* does not depend on Y_{t+1}, the consumption process is what is called a *multiplicative functional* in the applied mathematics literature.) This specification allows us to feature the process *X* in our analysis while allowing for some additional flexibility. Generally, we may think of this as a convenient specification of consumption that could emerge from a model in which consumption is determined endogenously.

Given the Markov dynamics, we seek a solution of the form

V_t/C_t = f(X_t)

for a positive function *f*. Writing α for (1 − γ)/(1 − ρ) and substituting this solution into the risk adjustment, we can express Eq. **2** as

f(x)^{1−ρ} = ζ(1 − exp(−δ)) + exp(−δ)(E[exp[(1 − γ)κ(X_t, Y_{t+1})] f(X_{t+1})^{1−γ} | X_t = x])^{1/α}. [**3**]

Remarkably, the solution to the fixed-point problem Eq. **3** is closely related to a Perron–Frobenius eigenvalue equation of the type analyzed by ref. 9 in their study of risk–return relations and risk pricing over long-term investment horizons. The eigenvalue problem studied in ref. 9 is also closely related to an eigenvalue equation that occurs in the study of large deviations. Consider the mapping:

Th(x) = E(exp[(1 − γ)κ(X_t, Y_{t+1})] h(X_{t+1}) | X_t = x).

The eigenvalue equation of interest is

Te = exp(η)e [**4**]

for a positive function *e*. In many specifications this equation has multiple positive solutions with eigenfunctions that are not equal up to a scale factor.
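When *X* is a finite-state chain and κ depends only on the current state, the eigenvalue problem reduces to a matrix computation. The following sketch is purely illustrative — the two-state transition matrix, growth rates, and γ below are hypothetical, not taken from this paper:

```python
import numpy as np

# Hypothetical two-state Markov chain (illustrative numbers, not calibrated):
# P is the transition matrix and kappa(x) is log consumption growth in state x.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
kappa = np.array([0.02, -0.01])
gamma = 5.0

# T h(x) = E[exp((1 - gamma) kappa(X_t)) h(X_{t+1}) | X_t = x]
T = np.exp((1.0 - gamma) * kappa)[:, None] * P

# Principal (Perron-Frobenius) eigenvalue exp(eta) and a positive eigenfunction e
vals, vecs = np.linalg.eig(T)
i = np.argmax(vals.real)
exp_eta = vals.real[i]
e = np.abs(vecs[:, i].real)   # Perron vector, taken with positive sign
eta = np.log(exp_eta)

print(np.allclose(T @ e, exp_eta * e))  # True: e solves T e = exp(eta) e
```

Because this T has strictly positive entries, the Perron–Frobenius theorem delivers a unique (up to scale) strictly positive eigenfunction; in more general state spaces multiple positive solutions can arise, which is why the stochastic-stability refinement introduced below is needed.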

## Changing the Probability Measure

We use a Perron–Frobenius eigenfunction to change the probability measure. Associated with each such eigenfunction is a positive random variable

M_{t+1} = exp(−η) exp[(1 − γ)κ(X_t, Y_{t+1})] e(X_{t+1})/e(X_t)

that has conditional expectation equal to unity. We use this variable to define a change of measure for the transition probability of the Markov process, via

Ẽ[ϕ(X_{t+1}) | X_t = x] = E[M_{t+1} ϕ(X_{t+1}) | X_t = x]

for any Borel measurable function ϕ with the appropriate domain. This change in the transition probability preserves the Markov property and the restrictions imposed by Assumption 1. Only one of the eigenfunctions induces a change of measure that is stochastically stable in the sense of the following (uniqueness is established in ref. 9 for a continuous-time Markov specification, but their result has a direct counterpart for discrete time):

**Assumption 2.** *Under the change of probability measure*,

lim_{t→∞} Ẽ[ϕ(X_t) | X_0 = x] = Ẽ[ϕ(X_t)]

*for any bounded Borel measurable function* ϕ. *The expectation on the right-hand side uses a stationary distribution implied by the change in the transition distribution. We require that the convergence applies for almost all Markov states x under this stationary distribution*.

There is an extensive literature that gives sufficient conditions for stochastic stability.
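In a finite-state setting the change of measure amounts to twisting the transition matrix by a Perron–Frobenius eigenfunction and renormalizing. A minimal, self-contained sketch (the two-state chain and parameters are hypothetical):

```python
import numpy as np

# Hypothetical two-state chain (illustrative numbers, not calibrated).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
kappa = np.array([0.02, -0.01])
gamma = 5.0

# Perron-Frobenius eigenvalue and positive eigenfunction of T (Eq. 4)
T = np.exp((1.0 - gamma) * kappa)[:, None] * P
vals, vecs = np.linalg.eig(T)
i = np.argmax(vals.real)
exp_eta, e = vals.real[i], np.abs(vecs[:, i].real)

# Twisted transition matrix:
# P_tilde[x, x'] = exp(-eta) exp[(1 - gamma) kappa(x)] P[x, x'] e(x') / e(x)
P_tilde = np.exp((1.0 - gamma) * kappa)[:, None] * P * e[None, :] / (exp_eta * e[:, None])

print(np.allclose(P_tilde.sum(axis=1), 1.0))  # True: rows sum to one
```

The rows sum to one precisely because *e* solves the eigenvalue equation, which is the sense in which M_{t+1} has conditional expectation equal to unity.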

To apply this change in measure, we use a multiplicative scaling of functions:

g = (f)^{1−ρ}(ê)^{−1},   where ê = (e)^{(1−ρ)/(1−γ)}.

The transformed counterpart to Eq. **3** is Ug = g, where

Ug(x) = ζ(1 − exp(−δ))ê(x)^{−1} + λ(Ẽ[g(X_{t+1})^α | X_t = x])^{1/α}

and λ = exp(−δ + η/α). Note that this altered recursion uses the change of measure to absorb the stochastic component of growth. Moreover,

Ug(x) ≥ ζ(1 − exp(−δ))ê(x)^{−1}. [**5**]
We also consider an alternative recursion defined via an operator T̂ defined on nonnegative functions *h* given by

T̂h = [U(h^{1/α})]^α.

That is,

T̂h(x) = {ζ(1 − exp(−δ))ê(x)^{−1} + λ(Ẽ[h(X_{t+1}) | X_t = x])^{1/α}}^α.

In particular there is a one-to-one correspondence between fixed points of U and fixed points of T̂, and inequality Eq. **5** implies that T̂h(x) ≥ [ζ(1 − exp(−δ))ê(x)^{−1}]^α if α > 0. (Ref. 3 constructs spaces weighted by scale factors that depend on time, including factors with geometric decay as a featured case. The structure presumes processes with bounded support, although the support can increase over time because of the scale factors that they introduce. In contrast, we exploit heavily a Markov structure and use the Perron–Frobenius eigenvalue embedded in our change of probability measure to accommodate geometric growth and other convenient forms of stochastic growth in consumption. The recursions U and T̂ map into special cases of the recursions in ref. 3, except that we feature spaces defined by conditional moment restrictions instead of spaces of bounded functions.)

To maintain discounting in the presence of stochastic growth, we assume the following:

**Assumption 3.** λ < 1.

In terms of the initial parameters, Assumption 3 implies

δ > η(1 − ρ)/(1 − γ). [**6**]

For typical parameterizations, η < 0. Thus, when γ > 1, this bound on δ is positive for ρ < 1 and negative when ρ > 1. (It is possible that η is positive, which alters the parameter restrictions.)
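As a numerical illustration of this bound — using a hypothetical two-state chain with illustrative growth rates, nothing calibrated — consumption that grows on average combined with γ > 1 produces a negative η, and with ρ < 1 the implied lower bound on δ is positive:

```python
import numpy as np

# Hypothetical two-state chain: transition matrix and state-dependent
# log consumption growth (illustrative numbers only).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
kappa = np.array([0.02, -0.01])
gamma, rho = 5.0, 0.5

# Perron-Frobenius eigenvalue exp(eta) of the operator in Eq. 4
M = np.exp((1.0 - gamma) * kappa)[:, None] * P
eta = np.log(np.max(np.linalg.eigvals(M).real))

# Assumption 3 requires delta > eta (1 - rho)/(1 - gamma)
delta_bound = eta * (1.0 - rho) / (1.0 - gamma)
print(eta < 0.0)          # True here: consumption grows on average and gamma > 1
print(delta_bound > 0.0)  # True here: with rho < 1 the bound on delta is positive
```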

## Some Useful Inequalities

In this section we establish inequalities that we use to show the existence of fixed points of U and T̂. We consider alternative operators with fixed points that are easier to characterize. These alternative fixed points provide bounds for the fixed points that interest us. Starting from these bounds we construct monotone sequences that converge to candidate fixed points of U and T̂. We also show when the two constructed fixed points coincide. Recall that we have the flexibility to set ζ in an arbitrary fashion. We exploit this convenience by setting

ζ(1 − exp(−δ)) = 1 − λ.

### Inequalities for T̂.

Suppose that α < 0 and apply Jensen’s inequality to obtain

(1 − λ)ê(x)^{−1} + λ(Ẽ[h(X_{t+1}) | X_t = x])^{1/α} ≥ [(1 − λ)ê(x)^{−α} + λẼ(h(X_{t+1}) | X_t = x)]^{1/α}. [**7**]

Because α < 0, raising both sides of Eq. **7** to the power α reverses the inequality:

T̂h(x) ≤ (1 − λ)ê(x)^{−α} + λẼ[h(X_{t+1}) | X_t = x]. [**8**]

When α ≥ 1, relation Eq. **7** holds with the reverse inequality and raising both sides to the power α preserves inequality Eq. **8**. When 0 < α < 1, relation Eq. **7** holds and raising both sides to the power α gives us inequality Eq. **8** with the reverse sign. Thus, we have

T̂h ≤ T̄h for α < 0 or α ≥ 1, and T̂h ≥ T̄h for 0 < α < 1,

where

T̄h(x) = (1 − λ)ê(x)^{−α} + λẼ[h(X_{t+1}) | X_t = x].

A sufficient condition to obtain a fixed point for T̄ is the following:

**Assumption 4.** Ẽ[ê(X_t)^{−α}] < ∞, *where the expectation is computed using the stationary distribution implied by the change of measure*.

In this case

h̄(x) = (1 − λ) Σ_{j=0}^∞ λ^j Ẽ[ê(X_{t+j})^{−α} | X_t = x]

is in L¹ (using the stationary distribution) and is a fixed point for T̄. In addition, under Assumption 4, if α < 0 or α ≥ 1, because inequality Eq. **8** holds, T̂ maps the set {h : 0 ≤ h ≤ h̄} into itself.

### Inequalities for U.

Suppose again that α < 0 and apply Jensen’s inequality to obtain

Ẽ[g(X_{t+1})^α | X_t = x] ≥ (Ẽ[g(X_{t+1}) | X_t = x])^α. [**9**]

Raising both sides to the power 1/α reverses the inequality and thus

(Ẽ[g(X_{t+1})^α | X_t = x])^{1/α} ≤ Ẽ[g(X_{t+1}) | X_t = x]. [**10**]

For α ≥ 1, the inequality in Eq. **9** remains the same and raising both sides to the power 1/α does not reverse this inequality. For 0 < α < 1 the inequality in Eq. **9** is reversed and raising both sides to the power 1/α does not reverse the inequality. Thus

Ug ≤ Ūg for α < 1, and Ug ≥ Ūg for α ≥ 1,

where

Ūg(x) = (1 − λ)ê(x)^{−1} + λẼ[g(X_{t+1}) | X_t = x].

A sufficient condition to obtain a fixed point for Ū is the following:

**Assumption 5.** Ẽ[ê(X_t)^{−1}] < ∞, *where the expectation is computed using the same stationary distribution*.

In this case,

ḡ(x) = (1 − λ) Σ_{j=0}^∞ λ^j Ẽ[ê(X_{t+j})^{−1} | X_t = x]

is a fixed point for Ū.

A consequence of Jensen’s inequality is that Assumption 4 implies Assumption 5 when α ≥ 1 and conversely for 0 < α ≤ 1. For α < 0, they are not comparable. We can apply Jensen’s inequality to rank fixed points of the operators: (ḡ)^α ≤ h̄ when α < 0 or α ≥ 1, and (ḡ)^α ≥ h̄ when 0 < α ≤ 1.

### Candidate Fixed Points for U and T̂.

We use monotonicity to construct candidate fixed points for U and T̂. We consider three cases associated with three different intervals for α.

#### α < 0.

When Assumption 5 is satisfied, Uḡ ≤ Ūḡ = ḡ, and thus {U^j ḡ : j = 1, 2, …} is a decreasing sequence of functions. This sequence converges pointwise to a function g_∞. We establish below that this limit is a fixed point for U.

When Assumption 4 is satisfied, we use the pointwise limit h_∞ of the decreasing sequence {T̂^j h̄} as a candidate fixed point for T̂. Because (ḡ)^α ≤ h̄, (U^j ḡ)^α = T̂^j[(ḡ)^α] ≤ T̂^j h̄. Taking limits as *j* tends to infinity, (g_∞)^α ≤ h_∞ when Assumptions 4 and 5 are both satisfied.

#### .

In this case we impose the more restrictive Assumption 4 and use to construct a fixed point. Note that

Applying to both sides,

Repeating this argument, we see that

Because is an increasing sequence of functions that is bounded from above. This sequence converges pointwise to a function .

Because , Taking limits as *j* tends to infinity,

#### 0 < α ≤ 1.

In this case we impose the more restrictive Assumption 5 and use ḡ to construct a decreasing sequence {U^j ḡ} bounded below by the strictly positive function (1 − λ)ê^{−1} (see bound Eq. **5**) and thus converging pointwise to a positive function g_∞. We use h̄ to construct an increasing sequence {T̂^j h̄} that is bounded from above by a positive function. This sequence converges to a function h_∞ with h_∞ ≤ (g_∞)^α.

### Extending the Domain of Convergence.

We constructed fixed points by iterating operators starting from a specific function, say g₀, and converging to a limit point, say g_∞, where Ug_∞ = g_∞. Consider a function *g* sandwiched between them, say g₀ ≥ g ≥ g_∞. Then U^j g₀ ≥ U^j g ≥ g_∞. Because {U^j g₀} converges to g_∞, {U^j g} also converges to g_∞. At least in this specific sense, the candidate fixed points are “stable.”
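For a finite-state chain the moment conditions hold trivially and the monotone construction can be carried out numerically. A minimal sketch for a case with α < 0 (all numbers hypothetical): starting from the fixed point ḡ of the dominating linear operator, the iterates of U decrease monotonically to a fixed point.

```python
import numpy as np

# Hypothetical two-state chain and preference parameters (illustrative only).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
kappa = np.array([0.02, -0.01])
gamma, rho, delta = 5.0, 0.5, 0.05
alpha = (1.0 - gamma) / (1.0 - rho)           # alpha = -8 < 0 here

# Perron-Frobenius eigenvalue/eigenfunction and twisted transition matrix
T = np.exp((1.0 - gamma) * kappa)[:, None] * P
vals, vecs = np.linalg.eig(T)
i = np.argmax(vals.real)
exp_eta, e = vals.real[i], np.abs(vecs[:, i].real)
eta = np.log(exp_eta)
P_tilde = np.exp((1.0 - gamma) * kappa)[:, None] * P * e[None, :] / (exp_eta * e[:, None])

e_hat = e ** (1.0 / alpha)
lam = np.exp(-delta + eta / alpha)
assert lam < 1.0                              # Assumption 3 (discounting)

def U(g):
    # U g(x) = (1 - lam)/e_hat(x) + lam (E~[g(X')^alpha | x])^(1/alpha)
    return (1.0 - lam) / e_hat + lam * (P_tilde @ g**alpha) ** (1.0 / alpha)

# g_bar: fixed point of the dominating linear operator
# U_bar g = (1 - lam)/e_hat + lam P_tilde g
g = np.linalg.solve(np.eye(2) - lam * P_tilde, (1.0 - lam) / e_hat)

for _ in range(5000):                         # monotone decreasing iteration
    g_new = U(g)
    assert np.all(g_new <= g + 1e-12)         # U g <= U_bar g when alpha < 0
    if np.max(np.abs(g_new - g)) < 1e-14:
        break
    g = g_new

print(np.allclose(U(g), g))                   # True: g is (numerically) a fixed point
```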

## Main Result

We now state and prove a result on the existence of recursive utilities in a Markov setting. The proposition collects intermediate results proved earlier and shows that the candidate fixed points are actual fixed points and that they coincide if α < 0 or α ≥ 1.

**Proposition 6.** *Suppose* (*a*) {(X_t, Y_t)} *is a Markov process satisfying Assumption* 1, (*b*) *e is a solution to the Perron–Frobenius* *Eq*. **4** *satisfying Assumption* 2 *with* exp(η) *the associated eigenvalue*, *and* (*c*) *the subjective rate of discount satisfies* δ > η(1 − ρ)/(1 − γ) (*Assumption* 3). *Then for alternative ranges of α we have the following results*:

*i*) *If* α < 0, g_∞ *is a fixed point of* U *provided that Assumption* 5 *is satisfied, and* h_∞ *is a fixed point of* T̂ *provided that Assumption* 4 *is satisfied. When both assumptions are satisfied*, h_∞ = (g_∞)^α. *ii*) *If* 0 < α ≤ 1, g_∞ *is a fixed point of* U *provided that Assumption* 5 *is satisfied*. *iii*) *If* α ≥ 1, g_∞ *is a fixed point of* U *when Assumption* 4 *is satisfied. Moreover*, g_∞ *is the unique fixed point with a finite α moment under the* Ẽ *stationary distribution*.

Whereas the proposition features U, fixed points of T̂ are obtained by raising the fixed points of U to the power α. Solutions for the continuation value ratio are given by multiplying fixed points of U by the scale factor ê and raising the product to the power 1/(1 − ρ).

We first show that the limits we constructed above are actually fixed points.

### Existence of Fixed Points.

To prove existence, we again treat three cases, depending on the magnitude of α.

#### α < 0.

If Assumption 5 holds, then Uḡ ≤ ḡ and {U^j ḡ} is a dominated sequence of functions converging pointwise to g_∞. The dominated convergence theorem guarantees that Ẽ[(U^j ḡ)(X_{t+1})^α | X_t = x] converges to Ẽ[g_∞(X_{t+1})^α | X_t = x] with measure one. Hence g_∞ is a fixed point of U.

If Assumption 4 holds, then, as we showed above, T̂h̄ ≤ h̄ and T̂ maps {h : 0 ≤ h ≤ h̄} into itself. Because T̂^j h̄ ≤ (1 − λ)^α (ê)^{−α}, an implication of bound Eq. **5**, the dominated convergence theorem assures us that we may pass to the limit in the recursion, and h_∞ is the strictly positive (with probability 1) limit of {T̂^j h̄}. From inequality Eq. **8** it follows that for each *j*, the stationary expectation of T̂^j h̄ is finite. Because the sequence {T̂^j h̄} is monotone, Beppo Levi’s monotone convergence theorem thus implies that the stationary expectation of h_∞ is also finite, and as a consequence h_∞ is finite with probability one. It follows that h_∞ is a fixed point of T̂.

#### .

If Assumption 4 holds, and is a sequence of functions dominated by The remainder of the proof is as above.

#### 0 < α ≤ 1.

If Assumption 5 holds, the proof for α < 0 applies.

We next show that when α < 0 or α ≥ 1, the constructed fixed points are actually the same. Again we treat separately the cases α < 0 and α ≥ 1.

Consider first the case in which α < 0. For α < 0 the function u ↦ u^α is convex for u > 0. Consequently, for each fixed *x*, T̂h(x) is a convex function of *h*. A subgradient for this convex function at h₂ is the linear map that maps a function *h* into

λ[h₂(x)]^{(α−1)/α}[Ẽ(h₂(X_{t+1}) | X_t = x)]^{(1−α)/α} Ẽ[h(X_{t+1}) | X_t = x],

and a simple calculation shows that the coefficient multiplying the conditional expectation is positive. Thus, if h₁, h₂ are nonnegative fixed points of T̂,

h₁(x) − h₂(x) ≥ λ[h₂(x)]^{(α−1)/α}[Ẽ(h₂(X_{t+1}) | X_t = x)]^{(1−α)/α} Ẽ[h₁(X_{t+1}) − h₂(X_{t+1}) | X_t = x].

By the law of iterated expectations, the analogous inequality holds for expectations computed under the stationary distribution. Because the roles of h₁ and h₂ may be interchanged, h₁ and h₂ coincide in a set with measure 1. In particular, h_∞ = (g_∞)^α.

Next consider the case in which α ≥ 1. We view ‖g‖_x = [Ẽ(|g(X_{t+1})|^α | X_t = x)]^{1/α} as a conditional norm. As a consequence, if g₁ and g₂ are fixed points of U,

|g₁(x) − g₂(x)| = λ|‖g₁‖_x − ‖g₂‖_x| ≤ λ‖g₁ − g₂‖_x,

where the last inequality follows from the (reverse) triangle inequality. Next raise both sides to the power α and then integrate with respect to the stationary distribution. By the law of iterated expectations, the stationary expectation of |g₁ − g₂|^α is no larger than λ^α times itself, provided that g₁ and g₂ have finite α-moments under the stationary distribution. Thus, because λ < 1, g₁ and g₂ must be equal with probability 1.

Because (ḡ)^α ≤ (U^j ḡ)^α ≤ h̄ under Assumption 4, ḡ and g_∞ have finite α-moments under the stationary distribution. Therefore, the candidate fixed points constructed from alternative starting points coincide. In addition, g_∞ is the unique fixed point of U with a finite α-moment under the stationary distribution.

## Three Interesting Extensions

### Limiting Version of Asset Valuation.

Ref. 10 characterizes asset-pricing implications in the limiting case by interpreting the eigenvalue problem as the limit of a utility recursion. As is well known in the asset-pricing literature, one-period stochastic discount factors provide a convenient way to depict the “shadow prices” of one-period claims that would clear hypothetical competitive markets. See, for instance, refs. 11 and 12. The valuation of multiperiod claims can then be obtained by repeatedly applying the formula for valuation of one-period claims. The stochastic discount factor *S* for the recursive utility model satisfies

S_{t+1}/S_t = exp(−δ)(C_{t+1}/C_t)^{−ρ}(V_{t+1}/R_t)^{ρ−γ}. [**11**]

Using the implied one-period stochastic discount factor, the date *t* valuation of a claim that pays ϕ(X_{t+1}) at t + 1 is E[(S_{t+1}/S_t)ϕ(X_{t+1}) | X_t = x]. Iterating the operator extends pricing to claims with a longer payoff horizon. (Stochastic growth may be introduced into this valuation while preserving the same mathematical structure as in ref. 9.)

The formula for the stochastic discount factor remains well defined in the limiting case in which δ attains the bound in Eq. **6**, so that λ = 1. The limit operator is given by

U₁g(x) = (Ẽ[g(X_{t+1})^α | X_t = x])^{1/α}.

Any positive constant is a fixed point of U₁. One such constant is given by the limit solution to ref. 10 as ξ tends to zero. This constant corresponds to a continuation value process proportional to C_t [e(X_t)]^{1/(1−γ)}. [This mathematical characterization is very similar to that of Runolfsson (13), who studies ergodic risk-sensitive control problems using eigenfunction methods. In contrast to our analysis, Runolfsson abstracts from stochastic growth, and the change of probability measure that we apply is not part of his analysis.]

Setting δ to its limit value given in Eq. **6**, or equivalently setting λ = 1, and normalizing the constant fixed point to one, we obtain

S_{t+1}/S_t = exp(−η)(C_{t+1}/C_t)^{−γ}[e(X_{t+1})/e(X_t)]^{(ρ−γ)/(1−γ)}.

When the process *X* is stationary, the long-term decay of this stochastic discount factor is dominated by exp(−η)(C_{t+1}/C_t)^{−γ}, which is the stochastic discount factor for a model in which preferences are depicted by a time-separable power utility function with power 1 − γ. An equivalent depiction of the power utility specification is achieved by setting ρ = γ. The extra contribution of recursive utility is captured by the Perron–Frobenius eigenfunction *e*, via the term [e(X_{t+1})/e(X_t)]^{(ρ−γ)/(1−γ)}. The methods developed in refs. 9 and 10 use such representations to characterize permanent and transitory contributions to asset valuation and to make formal comparisons of recursive utility to power utility models of consumer preferences.

### Unitary Elasticity of Substitution.

So far we have abstracted from the case ρ = 1, for which the elasticity of intertemporal substitution is unity. When ρ = 1, we may use the recursion

Wg(x) = [exp(−δ)/(1 − γ)] log E(exp{(1 − γ)[κ(X_t, Y_{t+1}) + g(X_{t+1})]} | X_t = x),

in which g(X_t) = log V_t − log C_t and where we no longer restrict *g* to be positive. This recursion is a special case of the so-called “risk-sensitive recursion” studied in refs. 14 and 15, where discounting is included in the manner suggested by ref. 16. Let

ḡ(x) = Σ_{j=0}^∞ exp[−(j + 1)δ] E[κ(X_{t+j}, Y_{t+j+1}) | X_t = x].

Then ḡ is a fixed point of the linear operator W̄g(x) = exp(−δ)E[κ(X_t, Y_{t+1}) + g(X_{t+1}) | X_t = x], and W̄ has a fixed point if this sum is finite. We may use our previous arguments (Jensen’s inequality implies Wg ≤ W̄g when γ > 1) to show that {W^j ḡ} is a decreasing sequence, but we do not have an obvious lower bound on these iterations. When they converge to a finite-valued function g_∞, this function is a fixed point of W.
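A minimal numerical sketch of this ρ = 1 recursion on a hypothetical two-state chain (illustrative numbers; κ here depends only on the current state): iterating from the fixed point of the linear bound produces a decreasing sequence that settles on a fixed point.

```python
import numpy as np

# Hypothetical two-state chain (illustrative numbers, not calibrated).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
kappa = np.array([0.02, -0.01])
gamma, delta = 5.0, 0.05
beta = np.exp(-delta)

def W(g):
    # W g(x) = [exp(-delta)/(1-gamma)] log E[exp((1-gamma)(kappa(x) + g(X'))) | x]
    return beta * (kappa + np.log(P @ np.exp((1.0 - gamma) * g)) / (1.0 - gamma))

# Fixed point of the linear bound W_bar g = exp(-delta)(kappa + E[g(X') | x]),
# i.e. the discounted expected sum of future log consumption growth rates.
g = np.linalg.solve(np.eye(2) - beta * P, beta * kappa)

for _ in range(5000):
    g_new = W(g)
    assert np.all(g_new <= g + 1e-12)   # Jensen: W g <= W_bar g when gamma > 1
    if np.max(np.abs(g_new - g)) < 1e-14:
        break
    g = g_new

print(np.allclose(W(g), g))             # True: a fixed point of the recursion
```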

### Different Starting Point.

Our analysis takes as given the consumption dynamics, in contrast to stochastic growth economies such as those studied in ref. 17. The change of probability measure we use is determined by the multiplicative martingale component of consumption raised to a power, as discussed in refs. 9 and 10. Some stochastic growth economies with production have a balanced growth path relative to some stochastically growing technology. In such economies, the value of η and the change of measure may be deduced before solving the model. In particular, we may check the restriction δ > η(1 − ρ)/(1 − γ) by solving for η using the exogenously specified technology and the balanced-growth restriction. This restriction on δ may be viewed as an extension of ref. 18’s analysis of subjective discount rates in stochastic growth economies for models with power utility (ρ = γ). The eigenfunction *e*, which is also restricted in our analysis, will depend on a conjectured equilibrium solution for consumption, however.

## Relation to Large Deviations

The authors of ref. 4 and others use principal eigenvalue problems as a device for computing large deviation bounds. Although their analysis allows for the construction of large deviation bounds for a large class of events, we consider bounding a rather simple set of tail events.

Following the work of ref. 19, we explore the probabilities that consumption growth will be below some growth threshold at a given date. (Ref. 19 actually investigates the behavior of portfolios over long investment horizons whereas we look at consumption growth.)

Consider the following threshold probability:

Pr{log C_t − log C_0 ≤ t r}. [**12**]

This probability is the “value at risk” that the growth rate of consumption will be less than the threshold rate r. As we will eventually make the time horizon *t* tend to infinity, adding a constant to the threshold in Eq. **12** will be inconsequential. This computation is similar to but distinct from calculations for a class of ruin problems initiated by Cramér and Lundberg. See ref. 20 for a more refined use than what we describe here of large deviation theory to compute asymptotic ruin probabilities.

To bound the probability in Eq. **12**, we follow the usual approach to large deviations by constructing a family of functions that dominate the indicator function:

exp{θ[t r − (log C_t − log C_0)]} ≥ 1{log C_t − log C_0 ≤ t r}

for any θ ≥ 0. An implication of this domination expressed in terms of logarithms of probabilities scaled by 1/t is

θr + (1/t) log E(exp[−θ(log C_t − log C_0)]) ≥ (1/t) log Pr{log C_t − log C_0 ≤ t r},

where we scaled by *t*. This bound holds for all θ ≥ 0, which leads us to minimize the left-hand side with respect to θ. We study the limiting result as the time horizon becomes large. The optimized θ depends on the growth rate r used in constructing the threshold of interest. We link the choice of θ to the preference parameter γ, and, as a consequence, the inverse problem is of interest to us. Given θ, for what value of the growth rate r will this θ be the best choice for constructing a large-deviation bound?

The large *t* approximation to the left-hand side is

θr + η(−θ), [**13**]

where exp[η(−θ)] is the Perron–Frobenius eigenvalue obtained by solving

E(exp[−θκ(X_t, Y_{t+1})] e(X_{t+1}) | X_t = x) = exp[η(−θ)] e(x).

To construct the best possible asymptotic bound we minimize Eq. **13** with respect to θ ≥ 0 or, equivalently, we compute

ξ(r) = − inf_{θ≥0} [θr + η(−θ)],

which is a Legendre transform. The function η can be shown to be convex in θ, as is the Legendre transform ξ. With this construction, the decay rate in the probabilities for threshold rate r is ξ(r). The first-order conditions are:

r = η′(−θ),

provided that η is differentiable; the derivative η′(−θ) equals the mean of κ under the distorted distribution evaluated at the optimized value of θ. This same change in probability distribution is commonly used to verify that the upper bound just computed is also the best possible bound.
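These steps can be sketched numerically for a hypothetical two-state chain (illustrative numbers, not the calibration discussed below): compute η(−θ) from the Perron–Frobenius eigenvalue at each θ on a grid and minimize Eq. **13**. At θ = 0 the bound is zero, so the implied decay rate is nonnegative, and it is strictly positive when the threshold rate lies below mean growth.

```python
import numpy as np

# Hypothetical two-state chain (illustrative numbers, not calibrated).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
kappa = np.array([0.02, -0.01])

def eta(s):
    # Log Perron-Frobenius eigenvalue of h -> E[exp(s kappa(X_t)) h(X_{t+1}) | X_t]
    M = np.exp(s * kappa)[:, None] * P
    return np.log(np.max(np.linalg.eigvals(M).real))

r = 0.005                                   # threshold growth rate, below mean growth
thetas = np.linspace(0.0, 10.0, 2001)
bound = np.array([th * r + eta(-th) for th in thetas])

xi = -bound.min()                           # asymptotic decay rate for threshold r
theta_star = thetas[bound.argmin()]         # minimizing theta; gamma = theta + 1

print(xi > 0.0)   # True: r is below mean growth, so the tail probability decays
```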

So far we have taken the threshold rate r to be specified, and we solve for θ. To build a connection to our earlier analysis of intertemporal utility functions, we now consider the inverse problem by computing a threshold that solves the optimization problem for a given θ. Suppose that γ > 1 and let θ = γ − 1. For each such value of γ, we compute a threshold r for which the power specification (C_t)^{1−γ} for terminal consumption gives the best probability bound.

We illustrate these calculations using a specification from ref. 21 of a “long-run risk” model for consumption dynamics featured in ref. 22. The authors of ref. 22 use historical data from the United States to motivate their choice of parameters. Their model includes predictability in both conditional means and conditional volatility. We use the continuous-time specification from ref. 21 because the resulting model of stochastic volatility is more tractable. Our analysis assumes a discrete-time model. Because a continuous-time Markov process *X* observed at regularly spaced points in time remains a Markov process in discrete time, we use the implied discrete-time specification to construct preferences and analyze implications. In so doing we exploit the continuous-time quasi-analytical formulas given by ref. 10 for the eigenvalues η as an important input into our calculations.

We explore the consequences of changes in θ, and implicitly in γ, in Fig. 1, which depicts two curves. One curve plots the threshold rate r for which the value of θ on the horizontal axis is optimal. The threshold is computed as r = η′(−θ). The unconditional mean of the consumption growth rate is 0.0015, and this is equal to the threshold at θ = 0. We expect this outcome because the distribution of the growth rate (in logarithms) of consumption, after adjusting for the mean growth rate and scaling by the square root of the horizon, obeys a central limit theorem. Positive values of θ imply smaller values of r, which corresponds to movements into the left tail of the distribution of consumption growth. Fig. 1 also plots the implied decay rates ξ(r) in the probabilities that consumption growth over a horizon *t* falls short of the threshold rate r. This decay rate increases in θ because the implied threshold moves farther into the left tail. For instance, at the smaller of two values of θ that we highlight, the decay rate is 0.0104 per annum, and at the larger it is 0.0408 per annum. The zero threshold occurs at the value of θ for which η′(−θ) = 0, or equivalently at the corresponding γ = θ + 1.

In summary, the same Perron–Frobenius problem that we use as a device to analyze the infinite-horizon recursive valuation also gives an explicit link between the preference parameter γ and large-deviation bounds for the tail behavior of the growth rate in consumption.

## Conclusions

We use Perron–Frobenius theory applied to valuation operators to (*i*) establish existence of the infinite-horizon value function for specifications of recursive utility that are commonly used in the study of economic dynamics, (*ii*) provide a limiting characterization of asset valuation that features the beliefs of economic agents about macroeconomic growth and uncertainty, and (*iii*) illustrate a connection between our analysis and research on large-deviation bounds for Markov processes.

## Acknowledgments

We benefited from discussions with H. Berestycki, A. Bhandari, X. Chen, V. Haddad, S. Komminers, E. Renault, and G. Tsiang and from comments and computational assistance from M. Hendricks.

## Footnotes

^{1}To whom correspondence should be addressed. E-mail: joses{at}princeton.edu.

Author contributions: L.P.H. and J.A.S. designed research, performed research, and wrote the paper.

The authors declare no conflict of interest.

This article is a PNAS Direct Submission.

Freely available online through the PNAS open access option.

## References

- 4. Donsker MD, Varadhan SR
- 6. Kontoyiannis I, Meyn SP
- 9. Hansen LP, Scheinkman J
- 11. Hansen LP, Richard SF
- 12. Cochrane JH
- 14. Jacobson DH
- 15. Whittle P
- 16. Hansen LP, Sargent TJ
- 19. Stutzer M
- 21. Hansen LP, Heaton JC, Lee J, Roussanov N
## Article Classifications

- Social Sciences
- Economic Sciences