The matrix-valued hypergeometric equation

Edited by Richard A. Askey, University of Wisconsin, Madison, WI (received for review December 16, 2002)
Abstract
The hypergeometric differential equation was found by Euler [Euler, L. (1769) Opera Omnia Ser. 1, 11–13] and was extensively studied by Gauss [Gauss, C. F. (1812) Comm. Soc. Reg. Sci. II 3, 123–162], Kummer [Kummer, E. J. (1836) J. Reine Angew. Math. 15, 39–83; Kummer, E. J. (1836) J. Reine Angew. Math. 15, 127–172], and Riemann [Riemann, B. (1857) K. Ges. Wiss. Goett. 7, 1–24]. The hypergeometric function, known also as Gauss' function, is the unique solution of the hypergeometric equation that is analytic at z = 0 and has value 1 at z = 0. This function, because of its remarkable properties, has been used for centuries in the whole subject of special functions. In this article we give a matrix-valued analog of the hypergeometric differential equation and of Gauss' function. One can only speculate that many of the connections that made Gauss' function a vital part of mathematics at the end of the 20th century will be shared by its matrix-valued version, discussed here.
The hypergeometric equation is a second-order differential equation with three regular singular points. This equation was found by Euler (1) and was studied extensively by Gauss (2), Kummer (3, 4), and Riemann (5). Using a linear fractional transformation, we can place the three singularities at 0, 1, and ∞. Accordingly, the equation becomes

z(1 − z)F″ + [c − z(1 + a + b)]F′ − abF = 0. [1]

This is Euler's hypergeometric differential equation. The hypergeometric function, known also as Gauss' function, is defined by the hypergeometric series

_{2}F_{1}(a, b; c; z) = ∑_{n≥0} [(a)_{n}(b)_{n}/((c)_{n} n!)] z^{n}

for |z| < 1 and by analytic continuation elsewhere. If c is not in {0, −1, −2,...}, then _{2}F_{1}(a, b; c; z) is the only solution of Eq. 1 analytic at z = 0 and with value 1 at z = 0. See ref. 6 for a complete account of the subject.
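As a quick numerical illustration (the code and the test values are ours, not part of the original), the Gauss series can be summed term by term from the ratio of consecutive coefficients; the classical identity _{2}F_{1}(1, 1; 2; z) = −log(1 − z)/z provides a check.

```python
# Numerical sketch (ours): partial sums of the Gauss series
#   2F1(a, b; c; z) = sum_{n>=0} (a)_n (b)_n / ((c)_n n!) z^n,  |z| < 1,
# built from the term ratio ((a+n)(b+n))/((c+n)(n+1)) * z.
import math

def hyp2f1(a, b, c, z, terms=200):
    """Partial sum of the Gauss hypergeometric series for |z| < 1."""
    total, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
    return total

# Check against a classical closed form: 2F1(1, 1; 2; z) = -log(1 - z)/z.
z = 0.3
print(abs(hyp2f1(1, 1, 2, z) - (-math.log(1 - z) / z)))  # ~0
```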
This function, because of its remarkable properties, has been used for centuries in the whole subject of special functions. In the past 30 years, the discoveries of new special functions and of applications of special functions to new areas of mathematics have initiated a resurgence of interest in this field.
Representation theory provides a very important approach to the study of special functions. Historically this approach dates back to the classical papers of Cartan (7) and Weyl (8), in which they showed that spherical harmonics arise in a natural way from a study of functions on G/K, where G is the orthogonal group in n-space and K denotes the orthogonal group in (n − 1)-space. To get a theory applying to larger classes of special functions, it is necessary to drop the assumption that G is compact and also to consider functions not just on G/K but also on G with values in End(V), where V is any finite-dimensional complex vector space. Let K̂ denote the set of all equivalence classes of complex finite-dimensional irreducible representations of K; for each δ ∈ K̂, let ξ_{δ} denote the character of δ, d(δ) the degree of δ, and χ_{δ} = d(δ)ξ_{δ}. A spherical function Φ on G (see ref. 9) of type δ ∈ K̂ is a continuous function on G with values in End(V) such that Φ(e) = 1 and

Φ(x)Φ(y) = ∫_{K} χ_{δ}(k^{−1})Φ(xky) dk, for all x, y ∈ G.
The first general results were obtained by Gelfand in 1950 (10), who considered spherical functions of trivial type for Riemannian symmetric pairs (G, K); a short time thereafter the fundamental papers of Godement (11) and Harish-Chandra (12, 13) appeared. It turns out that the spherical functions of trivial type for a rank-one Riemannian symmetric pair, when G is suitably parametrized, can be identified with hypergeometric functions. In ref. 14 one finds a detailed elaboration of this theory for any K type when the symmetric space G/K is the complex projective plane.
One can only speculate that many of the connections that made Gauss' function a vital part of mathematics at the end of the 20th century will be shared by its End(V)-valued version discussed here. It is natural to wonder whether the spherical functions of any type, associated to a rank-one Riemannian symmetric pair, can be expressed in terms of these matrix-valued hypergeometric functions, and to study their relation with the relatively new theory of matrix-valued orthogonal polynomials.
There are two other important programs to generalize the classical hypergeometric equation. One is due to Gelfand (15) and his school, and the other to Gross (16). In the first, the generalization involves scalar-valued functions of several variables, whereas in the second, one is dealing with scalar-valued functions of a matrix argument.
Moreover, it is worth observing that the abstract hypergeometric equation considered by Hille (17) can be written in the form of Eq. 2.
Let V be a d-dimensional complex vector space. Given A, B, C ∈ End(V), let us consider the following hypergeometric equation

z(1 − z)F″ + [C − z(1 + A + B)]F′ − ABF = 0, [2]

where F denotes a function on ℂ with values in V.
Let us look for solutions of the form F = z^{α}G, with G analytic at z = 0 and G(0) ≠ 0. We have

F′ = z^{α}(G′ + αz^{−1}G), F″ = z^{α}(G″ + 2αz^{−1}G′ + α(α − 1)z^{−2}G).

From Eq. 2 the following differential equation for G follows,

z(1 − z)G″ + [2α(1 − z) + C − z(1 + A + B)]G′ + [α(α − 1)(1 − z)z^{−1} + αCz^{−1} − α(1 + A + B) − AB]G = 0.

If we put G(z) = ∑_{n≥0} z^{n}G_{n}, we obtain the following recursion relation for the coefficients G_{n}. For all k ≥ −2 we have (with G_{−1} = 0)

(α + k + 2)(C + α + k + 1)G_{k+2} = (A + α + k + 1)(B + α + k + 1)G_{k+1}.

For k = −2 we have α(C + α − 1)G_{0} = 0, from which we derive the following indicial equation:

det(α(C + α − 1)) = 0. [3]
Let β_{1},..., β_{d} be the eigenvalues of C; then the roots of the indicial equation are α = 0, 1 − β_{1},..., 1 − β_{d}.
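A small numerical check (our code; the sample matrix C is an arbitrary choice) confirms that these are exactly the values of α at which α(C + α − 1) becomes singular.

```python
# Numerical illustration (ours): the indicial roots are alpha = 0 and
# alpha = 1 - beta_i for each eigenvalue beta_i of C, precisely the
# values at which alpha * (C + (alpha - 1) I) is singular.
import numpy as np

C = np.array([[2.5, 1.0], [0.0, 0.75]])   # sample coefficient, our choice
I = np.eye(2)
betas = np.linalg.eigvals(C)
roots = [0.0] + [1.0 - b for b in betas]

dets = [abs(np.linalg.det(alpha * (C + (alpha - 1.0) * I))) for alpha in roots]
print(dets)  # each determinant ~0: every root makes the matrix singular
```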
For α = 0 we obtain the solutions F of Eq. 2 analytic at z = 0, the Taylor coefficients of which are given by the recursion relation

(k + 2)(C + k + 1)F_{k+2} = (A + k + 1)(B + k + 1)F_{k+1}, for all k ≥ −1. [4]

If the eigenvalues of C are not 0, −1, −2,..., then the function F is characterized by its value at 0, because the matrix (k + 2)(C + k + 1) is nonsingular for all k ≥ −1.
Let us introduce the notation

(C, A, B)_{m+1} = (C + m)^{−1}(A + m)(B + m)(C, A, B)_{m}, for all m ≥ 0, and (C, A, B)_{0} = 1.

We observe that (C, A, B)_{m} is well defined for all m ≥ 0 precisely when no eigenvalue of C lies in {0, −1, −2,...}, and that in the scalar case C = c, A = a, B = b it reduces to (a)_{m}(b)_{m}/(c)_{m}.
Theorem 1. If {F_{n}}_{n≥0} is a sequence in V that satisfies the recursion relation (Eq. 4) and no eigenvalue of C lies in {0, −1, −2,...}, then

(i) F_{n+1} = [1/(n + 1)](C + n)^{−1}(A + n)(B + n)F_{n}, for all n ≥ 0, and

(ii) F_{n} = (1/n!)(C, A, B)_{n}F_{0}, for all n ≥ 0.
Proof: For (i), if we put k = −1 in Eq. 4 we get CF_{1} = ABF_{0}, and hence F_{1} = C^{−1}ABF_{0}. Now let n ≥ 1 and set k = n − 1 in Eq. 4; then

(n + 1)(C + n)F_{n+1} = (A + n)(B + n)F_{n}.

Since C + n is nonsingular, this proves (i). The statement in (ii) follows directly from (i) by induction on n ≥ 0.
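A minimal numerical sketch of Theorem 1(i) (our code): summing ∑ z^{n}F_{n} with F_{n+1} = [1/(n + 1)](C + n)^{−1}(A + n)(B + n)F_{n}. In the 1 × 1 case this must reduce to Gauss' function, which gives a convenient test.

```python
# Sketch (ours) of the Theorem 1(i) recursion for the analytic solution of
# Eq. 2:  F_{n+1} = (1/(n+1)) (C + n)^{-1} (A + n)(B + n) F_n,
# assuming no eigenvalue of C lies in {0, -1, -2, ...}.
import numpy as np

def matrix_hyp2f1(C, A, B, z, F0, terms=100):
    """Partial sum of sum_n z^n F_n from the Theorem 1 recursion (|z| < 1)."""
    I = np.eye(C.shape[0])
    Fn = np.array(F0, dtype=float)
    total = Fn.copy()
    for n in range(terms):
        Fn = np.linalg.solve(C + n * I, (A + n * I) @ (B + n * I) @ Fn) / (n + 1)
        total = total + z ** (n + 1) * Fn
    return total

# In the 1x1 case this reduces to Gauss' function, e.g.
# 2F1(1, 1; 2; z) = -log(1 - z)/z.
z = 0.3
v = matrix_hyp2f1(np.array([[2.0]]), np.array([[1.0]]), np.array([[1.0]]),
                  z, [1.0])
print(v[0], -np.log(1 - z) / z)  # the two values agree
```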
Definition 1: If A, B, C ∈ End(V) and no eigenvalue of C is in the set {0, −1, −2,...}, we define

_{2}F_{1}(A, B; C; z) = ∑_{m≥0} (z^{m}/m!)(C, A, B)_{m}.
In the next theorem we summarize our results on the solutions of differential Eq. 2 that are analytic at z = 0.
Theorem 2. If no eigenvalue of C lies in {0, −1, −2,...}, then

(i) the function _{2}F_{1}(A, B; C; z) is analytic on |z| < 1 with values in End(V), and

(ii) if F_{0} ∈ V, then F(z) = _{2}F_{1}(A, B; C; z)F_{0} is a solution of the hypergeometric equation Eq. 2 such that F(0) = F_{0}. Conversely, any solution F analytic at z = 0 is of this form.
Let β ≠ 1 be an eigenvalue of C. Then α = 1 − β is a nonzero root of indicial Eq. 3. Now we want to study a sequence {G_{k}}_{k≥0} satisfying

(α + k + 2)(C + α + k + 1)G_{k+2} = (A + α + k + 1)(B + α + k + 1)G_{k+1}, for all k ≥ −2, with G_{−1} = 0. [5]

For k = −2, α(C + α − 1)G_{0} = 0 if and only if G_{0} is an eigenvector of C of eigenvalue β.
If α ∉ {−1, −2,...} and no eigenvalue of C + α lies in {0, −1, −2,...}, then {G_{k}}_{k≥0} is determined by G_{0}.
Theorem 3. If α ∉ {−1, −2,...}, no eigenvalue of C + α lies in {0, −1, −2,...}, {G_{n}}_{n≥0} is a sequence in V that satisfies the recursion relation (Eq. 5), and G_{0} ∈ V is an eigenvector of C of eigenvalue β, then

(i) G_{n+1} = [1/(α + n + 1)](C + α + n)^{−1}(A + α + n)(B + α + n)G_{n}, for all n ≥ 0, and

(ii) G_{n} = [1/(α + 1)_{n}](C + α, A + α, B + α)_{n}G_{0}, for all n ≥ 0.
Proof: For (i), if we put k = −1 in Eq. 5 we obtain

(α + 1)(C + α)G_{1} = (A + α)(B + α)G_{0},

which proves (i) for n = 0. Now for n ≥ 1 the proof continues along the same lines as in Theorem 1. The statement in (ii) follows directly from (i) by induction on n ≥ 0.
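In the scalar case the recursion of Theorem 3(i) can be checked against the coefficients of the classical second solution z^{1−c}_{2}F_{1}(a − c + 1, b − c + 1; 2 − c; z) (cf. the scalar remark in Theorem 4 below); a sketch with arbitrary sample parameters (our code):

```python
# Scalar-case check (ours) of Theorem 3(i): with C = c, A = a, B = b and
# alpha = 1 - c, the recursion
#   G_{n+1} = (a+alpha+n)(b+alpha+n) / ((c+alpha+n)(alpha+n+1)) G_n
# reproduces the coefficients (a-c+1)_n (b-c+1)_n / ((2-c)_n n!) of the
# classical second solution.
import math

def poch(x, n):
    """Pochhammer symbol (x)_n = x (x+1) ... (x+n-1)."""
    r = 1.0
    for k in range(n):
        r *= x + k
    return r

a, b, c = 0.5, 1.5, 0.25   # sample parameters, our choice
alpha = 1.0 - c

G = [1.0]
for n in range(6):
    G.append((a + alpha + n) * (b + alpha + n)
             / ((c + alpha + n) * (alpha + n + 1)) * G[n])

classical = [poch(a - c + 1, n) * poch(b - c + 1, n)
             / (poch(2 - c, n) * math.factorial(n)) for n in range(7)]

diff = max(abs(G[n] - classical[n]) for n in range(7))
print(diff)  # tiny (pure floating-point rounding)
```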
Definition 2: If A, B, C ∈ End(V), α ∉ {−1, −2,...}, and no eigenvalue of C + α is in {0, −1, −2,...}, then we define the function

_{2}F_{1}(A, B; C; α; z) = ∑_{n≥0} [z^{n}/(α + 1)_{n}](C + α, A + α, B + α)_{n}.

Notice that for α = 0 we have _{2}F_{1}(A, B; C; 0; z) = _{2}F_{1}(A, B; C; z).
Theorem 4. If α ∉ {−1, −2,...} and no eigenvalue of C + α lies in {0, −1, −2,...}, then _{2}F_{1}(A, B; C; α; z) is analytic on |z| < 1 with values in End(V).

If β = 1 − α is an eigenvalue of C and G_{0} ∈ V is an eigenvector of C of eigenvalue β, then F(z) = z^{α}_{2}F_{1}(A, B; C; α; z)G_{0} is a solution of the hypergeometric equation Eq. 2. Observe that when V = ℂ and C = c, A = a, B = b are complex numbers, then β = c, and for G_{0} = 1 the above function coincides with z^{1−c}_{2}F_{1}(a − c + 1, b − c + 1; 2 − c; z).
Corollary 1. Let C be diagonalizable, and let V(β_{i}) be the eigenspace of C of eigenvalue β_{i}. Let {F_{j}}_{j} be a basis of V, and let {G_{i,j}}_{j} be a basis of V(β_{i}) for each β_{i}. If no eigenvalue of C is an integer, no two eigenvalues of C differ by an integer, and O is a simply connected region in ℂ − {0, 1} with 0 ∈ Ō, then {_{2}F_{1}(A, B; C; z)F_{j}}_{j} is a basis of the space of all solutions of the hypergeometric equation Eq. 2 analytic at z = 0. Moreover, {_{2}F_{1}(A, B; C; z)F_{j}}_{j} ∪ {z^{1−β_{i}}_{2}F_{1}(A, B; C; 1 − β_{i}; z)G_{i,j}}_{i,j} is a basis of the space of all analytic solutions on O.
When V = ℂ, a differential equation of the form

z(1 − z)F″ + (c − zu)F′ − vF = 0, [6]

with u, v, c ∈ ℂ, after solving a quadratic equation, becomes

z(1 − z)F″ + [c − z(1 + a + b)]F′ − abF = 0, [7]

with 1 + a + b = u and ab = v. This is not necessarily the case when dim(V) > 1. In other words, a differential equation of the form

z(1 − z)F″ + (C − zU)F′ − VF = 0, [8]

with U, V, C ∈ End(V), cannot always be reduced to one of the form of Eq. 2, because a quadratic equation in a noncommutative setting such as End(V) may have no solutions. Thus, it is important to show how to obtain the solutions of Eq. 8.
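In the scalar case the quadratic in question is t² − (u − 1)t + v = 0, whose roots a and b satisfy 1 + a + b = u and ab = v; over ℂ it always factors, which is exactly what can fail in End(V). A sketch (our code):

```python
# Sketch (ours): the scalar reduction from Eq. 6 to Eq. 7 chooses a, b with
# 1 + a + b = u and ab = v, i.e. the roots of t^2 - (u - 1) t + v = 0.
# Over C this quadratic always has roots; in End(V) it may not.
import cmath

def scalar_reduction(u, v):
    """Return (a, b) with 1 + a + b = u and a*b = v."""
    disc = cmath.sqrt((u - 1) ** 2 - 4 * v)
    a = ((u - 1) + disc) / 2
    b = ((u - 1) - disc) / 2
    return a, b

a, b = scalar_reduction(4.0, 2.0)
print(abs(1 + a + b - 4.0), abs(a * b - 2.0))  # both ~0
```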
If the eigenvalues of C are not 0, −1, −2,..., let us introduce the sequence [C, U, V]_{n} ∈ End(V) by defining inductively [C, U, V]_{0} = 1 and

[C, U, V]_{n+1} = (C + n)^{−1}[n(n − 1) + nU + V][C, U, V]_{n}, for all n ≥ 0.
Definition 3: If U, V, C ∈ End(V) and no eigenvalue of C is in the set {0, −1, −2,...}, we define

_{2}H_{1}(U, V; C; z) = ∑_{n≥0} (z^{n}/n!)[C, U, V]_{n}.

If α ∉ {−1, −2,...} and no eigenvalue of C + α is in {0, −1, −2,...}, then we define the function

_{2}H_{1}(U, V; C; α; z) = ∑_{n≥0} [z^{n}/(α + 1)_{n}][C + α, U + 2α, α(α − 1) + αU + V]_{n}.

Notice that for α = 0 we have _{2}H_{1}(U, V; C; 0; z) = _{2}H_{1}(U, V; C; z), and that if U = 1 + A + B and V = AB, then _{2}H_{1}(U, V; C; z) = _{2}F_{1}(A, B; C; z).
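A numerical sketch (our code; sample matrices are arbitrary choices): assuming Eq. 8 has the form z(1 − z)F″ + (C − zU)F′ − VF = 0, its series coefficients obey (n + 1)(C + n)F_{n+1} = [n(n − 1) + nU + V]F_{n}, and with U = 1 + A + B, V = AB this collapses to the _{2}F_{1} recursion, since n(n − 1) + n(1 + A + B) + AB = (A + n)(B + n).

```python
# Sketch (ours): series coefficients for Eq. 8 via
#   (n + 1)(C + n) F_{n+1} = (n(n - 1) + n U + V) F_n,
# checked against the 2F1 recursion when U = 1 + A + B, V = AB.
import numpy as np

def hyp_eq8(C, U, V, z, F0, terms=100):
    """Partial sum of sum_n z^n F_n for Eq. 8 (|z| < 1)."""
    I = np.eye(C.shape[0])
    Fn = np.array(F0, dtype=float)
    total = Fn.copy()
    for n in range(terms):
        rhs = (n * (n - 1) * I + n * U + V) @ Fn
        Fn = np.linalg.solve(C + n * I, rhs) / (n + 1)
        total = total + z ** (n + 1) * Fn
    return total

A = np.array([[1.0, 0.3], [0.0, 0.5]])   # sample matrices, our choice
B = np.array([[0.7, 0.0], [0.2, 1.1]])
C = np.array([[2.0, 0.1], [0.0, 3.0]])
I2 = np.eye(2)
F0 = [1.0, -1.0]
z = 0.25

# Eq. 8 with U = 1 + A + B, V = AB ...
via_eq8 = hyp_eq8(C, I2 + A + B, A @ B, z, F0)

# ... versus the 2F1 recursion F_{n+1} = (C+n)^{-1}(A+n)(B+n)F_n/(n+1):
Fn = np.array(F0, dtype=float)
via_2f1 = Fn.copy()
for n in range(100):
    Fn = np.linalg.solve(C + n * I2, (A + n * I2) @ (B + n * I2) @ Fn) / (n + 1)
    via_2f1 = via_2f1 + z ** (n + 1) * Fn

print(np.max(np.abs(via_eq8 - via_2f1)))  # ~0
```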
Now Theorem 4 and Corollary 1 generalize, mutatis mutandis, in the following way.
Theorem 5.

(i) If α ∉ {−1, −2,...} and no eigenvalue of C + α lies in {0, −1, −2,...}, the function _{2}H_{1}(U, V; C; α; z) is analytic on |z| < 1 with values in End(V).

(ii) If β = 1 − α is an eigenvalue of C and G_{0} ∈ V is an eigenvector of C of eigenvalue β, then z^{α}_{2}H_{1}(U, V; C; α; z)G_{0} is a solution of the differential equation Eq. 8.
Corollary 2. Let C be diagonalizable, and let V(β_{i}) be the eigenspace of C of eigenvalue β_{i}. Let {F_{j}}_{j} be a basis of V, and let {G_{i,j}}_{j} be a basis of V(β_{i}) for each β_{i}. If no eigenvalue of C is an integer, no two eigenvalues of C differ by an integer, and O is a simply connected region in ℂ − {0, 1} with 0 ∈ Ō, then {_{2}H_{1}(U, V; C; z)F_{j}}_{j} is a basis of the space of all solutions of the differential equation Eq. 8 analytic at z = 0. Moreover, {_{2}H_{1}(U, V; C; z)F_{j}}_{j} ∪ {z^{1−β_{i}}_{2}H_{1}(U, V; C; 1 − β_{i}; z)G_{i,j}}_{i,j} is a basis of the space of all analytic solutions on O.
Example: Following Grünbaum (see ref. 18), let us consider the differential equation [9], where F denotes a function on ℂ with values in M(2, ℂ), and X, U, V, and W are 2 × 2 matrices depending on parameters α, β ∈ ℂ and j = 0, 1,....
The term FW in Eq. 9 forces us to consider this equation as a differential equation on functions that take values in ℂ⁴ ≅ M(2, ℂ) and to consider the left and right multiplications by matrices in M(2, ℂ) as linear maps in End(ℂ⁴). Thus, instead of Eq. 9, we shall consider an equivalent differential equation [10] of the form of Eq. 8, with coefficients C, Ũ, and T̃ in End(ℂ⁴).
It is easy to verify that one can find matrices A and B satisfying Ũ = 1 + A + B and T̃ = AB; the parameters x_{1}, x_{3}, y_{1}, and y_{3} appearing in A and B are subject only to mild conditions.
It is important to notice that spec(C) = {α + 1, α + 2}, where each eigenvalue has multiplicity 2. We also remark that (A + k)(B + k) is, generically, nonsingular for k ≠ j, and that the kernel of (A + j)(B + j) is two-dimensional. Thus, _{2}F_{1}(A, B; C; z) is not a polynomial function, as in the classical case, but nevertheless we have the following result.
Corollary 3. Differential Eq. 9 is equivalent to a hypergeometric equation of the form of Eq. 2. Therefore, the Jacobi polynomials introduced by Grünbaum are given in terms of matrix-valued hypergeometric functions.
This gives an explicit example within the theory of matrix-valued orthogonal polynomials initiated by Krein (19).
Footnotes

* E-mail: tirao{at}mate.uncor.edu.

This paper was submitted directly (Track II) to the PNAS office.

Received December 16, 2002. Accepted April 30, 2003.

Copyright © 2003, The National Academy of Sciences
References

1. Euler, L. (1769) Opera Omnia Ser. 1, 11–13.
2. Gauss, C. F. (1812) Comm. Soc. Reg. Sci. II 3, 123–162.
3. Kummer, E. J. (1836) J. Reine Angew. Math. 15, 39–83.
4. Kummer, E. J. (1836) J. Reine Angew. Math. 15, 127–172.
5. Riemann, B. (1857) K. Ges. Wiss. Goett. 7, 1–24.
6. Andrews, G., Askey, R. & Roy, R. (1999) Special Functions: Encyclopedia of Mathematics and Its Applications (Cambridge Univ. Press, Cambridge, U.K.).
8. Weyl, H. (1931) Gruppentheorie und Quantenmechanik (Hirzel, Leipzig), 2nd Ed.
9. Tirao, J. (1977) Rev. Union Mat. Argent. 28, 75–98.
10. Gelfand, I. (1950) Dokl. Akad. Nauk SSSR 70, 5–8.
15. Gelfand, I. (1986) Dokl. Akad. Nauk SSSR 288, 14–18.
17. Hille, E. (1969) Lectures on Ordinary Differential Equations (Addison–Wesley, Reading, MA).
18. Grünbaum, F. A. (2003) Bull. Sci. Math. 127, 207–214.
19. Krein, M. G. (1949) Dokl. Akad. Nauk SSSR 69, 125–128.