# Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density


Communicated by Vladimir Rokhlin, Yale University, New Haven, CT, June 14, 2010 (received for review May 24, 2010)

## Abstract

We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov–Smirnov tests, particularly Kuiper’s variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).

A basic task in statistics is to ascertain whether a given set of independent and identically distributed (i.i.d.) draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} does not come from a distribution with a specified probability density function *p* (the null hypothesis is that *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} do in fact come from the specified *p*). In the present paper, we consider the case when *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} are real valued. In this case, the most commonly used approach is due to Kolmogorov and Smirnov (with a popular modification by Kuiper); see, for example, Sections 14.3.3 and 14.3.4 of ref. 1, refs. 2 and 3, or *Test Statistics* below.

The Kolmogorov–Smirnov approach considers the size of the discrepancy between the cumulative distribution function for *p* and the empirical cumulative distribution function defined by *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} (see, for example, *Notation* and *Test Statistics* below for definitions of cumulative distribution functions and empirical cumulative distribution functions). If the i.i.d. draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} used to form the empirical cumulative distribution function are taken from the probability density function *p* used in the Kolmogorov–Smirnov test, then the discrepancy is small. Thus, if the discrepancy is large, then we can be confident that *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} do not come from a distribution with probability density function *p*.

However, the size of the discrepancy between the cumulative distribution function for *p* and the empirical cumulative distribution function constructed from the i.i.d. draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} does not always signal that *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} do not arise from a distribution with the specified probability density function *p*, even when *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} do not in fact arise from *p*. In some cases, *n* has to be absurdly large for the discrepancy to be significant. It is easy to see why:

The cumulative distribution function is an indefinite integral of the probability density function *p*. Therefore, the cumulative distribution function is a smoothed version of the probability density function; focusing on the cumulative distribution function rather than *p* itself makes it harder to detect discrepancies in regions where *p* is small in comparison with its values in surrounding regions. For example, consider the probability density function *p* depicted in Fig. 1 (a “tent” with a narrow triangle removed at its apex) and the probability density function *q* depicted in Fig. 2 below (nearly the same tent, but with the narrow triangle intact, not removed). The cumulative distribution functions for *p* and *q* are very similar, so tests of the classical Kolmogorov–Smirnov type have trouble signaling that i.i.d. draws taken from *q* are actually not taken from *p*. Section 14.3.4 of ref. 1 highlights this problem and a strategy for its solution, hence motivating us to write the present article.

We propose to supplement tests of the classical Kolmogorov–Smirnov type with tests for whether any of the values *p*(*X*_{1}),*p*(*X*_{2}),…,*p*(*X*_{n-1}),*p*(*X*_{n}) are small. If any of these values are small, then we can be confident that the i.i.d. draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} did not arise from the probability density function *p*. Theorem 3 below formalizes the notion of any of *p*(*X*_{1}),*p*(*X*_{2}),…,*p*(*X*_{n-1}),*p*(*X*_{n}) being small. We also propose another complementary test, which amounts to using the Kolmogorov–Smirnov approach after “rearranging” the probability density function *p* so that it is nondecreasing on the shortest interval outside which it vanishes (see Remark 1 and Eq. **4** below).

For descriptions of other generalizations of and alternatives to the Kolmogorov–Smirnov approach (concerning issues distinct from those treated in the present paper), see, for example, Sections 14.3.3 and 14.3.4 of ref. 1 or refs. 2–13 and their compilations of references. For a more general approach, based on customizing statistical tests for problem-specific families of alternative hypotheses, see ref. 14. Below, we compare the test statistics of the present article with one of the most commonly used test statistics of the Kolmogorov–Smirnov type, namely Kuiper’s (see, for example, refs. 2 and 3 or *Test Statistics* below). We recommend using the test statistics of the present paper in conjunction with the Kuiper statistic, to be conservative, as all these statistics complement each other, helping compensate for their inevitable deficiencies.

There are at least two canonical applications. First, the tests of the present article can be suitable for checking for malfunctions and bugs in computer codes that are supposed to generate pseudorandom i.i.d. draws from specified probability density functions (especially the complicated densities encountered frequently in practice). Good software engineering requires such independent tests for helping validate that computer codes produce correct results (of course, such validations do not obviate careful, structured programming, but are instead complementary). Second, many theories from physics and physical chemistry predict (often a priori) the probability density functions from which experiments are supposed to be taking i.i.d. draws. The tests of the present paper can be suitable for ruling out erroneous theories of this type on the basis of experimental data. Moreover, there are undoubtedly many other potential applications.

For definitions of the notation used throughout this article, see *Notation* below. *Test Statistics* introduces several statistical tests. *Numerical Examples* illustrates the power of the statistical tests. *Conclusions and Generalizations* draws some conclusions and proposes directions for further work.

None of the tests used in the present paper requires any intervention by the user of a suitable software implementation. The tests are not a panacea; all such tests have the drawbacks discussed in ref. 14. See ref. 14 for a much more flexible alternative, allowing the user to amend tests to be more powerful against user-specified parametric families of alternative hypotheses.

## Notation

In this section, we set notation used throughout the present paper.

We use **P** to take the probability of an event. We say that *p* is a probability density function to mean that *p* is a (Lebesgue-measurable) function from ℝ to [0,∞) such that the integral of *p* over ℝ is 1.

The cumulative distribution function *P* for a probability density function *p* is
$$P(x) = \int_{-\infty}^{x} p(y)\,dy \tag{1}$$
for any real number *x*. If *X* is a random variable distributed according to *p*, then *P*(*x*) is just the probability that *X* ≤ *x*. Therefore, if *X* is a random variable distributed according to *p*, then the cumulative distribution function for *p*(*X*) is
$$\tilde{P}(x) = \int_{\{y \in \mathbb{R}\,:\,p(y) \le x\}} p(y)\,dy, \tag{2}$$
the probability that *p*(*X*) ≤ *x*.
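To make Eqs. **1** and **2** concrete, here is a minimal numerical sketch (in Python; it is not part of the original paper, and the triangular density at the end is purely illustrative) that approximates *P* and $\tilde{P}$ on a grid for a user-supplied density:

```python
import numpy as np

def cdf_and_pcdf(p, lo, hi, m=20001):
    """Approximate P (Eq. 1) and P-tilde (Eq. 2) for a density p supported in (lo, hi).

    P integrates p from the left; P-tilde(t) integrates p over the region
    where p(y) <= t, i.e., the probability that p(X) <= t for X drawn from p.
    """
    y = np.linspace(lo, hi, m)
    dy = y[1] - y[0]
    py = p(y)
    P = np.cumsum(py) * dy                              # P(y[i]), Eq. 1
    t = np.linspace(0.0, py.max(), 200)
    Pt = np.array([py[py <= s].sum() * dy for s in t])  # P-tilde(t[j]), Eq. 2
    return y, P, t, Pt

# Illustrative "tent" density on (-1, 1):
tent = lambda x: np.where(np.abs(x) < 1, 1.0 - np.abs(x), 0.0)
y, P, t, Pt = cdf_and_pcdf(tent, -1.0, 1.0)
print(P[-1], Pt[-1])   # both ~1: p integrates to 1, and p(X) <= max(p) a.s.
```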

For reference, we summarize our (reasonably standard) notational conventions in Table 1.

**Remark 1.** The “nonincreasing rearrangement” (or nondecreasing rearrangement) of a probability density function (see, for example, Section V.3 of ref. 15) clarifies the meaning of the distribution function $\tilde{P}$ defined in Eq. **2**. With *P* defined in Eq. **1** and $\tilde{P}$ defined in Eq. **2**, $\tilde{P}(p(x)) = P(x)$ for any real number *x* in the shortest interval outside which the probability density function *p* vanishes, as long as *p* is increasing on that shortest interval (for then the set where *p* is at most *p*(*x*) is precisely the part of that interval lying to the left of *x*).

## Test Statistics

In this section, we introduce several statistical tests.

One test of whether i.i.d. draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} do not arise from a specified probability density function *p* is the Kolmogorov–Smirnov test (or Kuiper’s often preferable variation). If *X* is a random variable distributed according to *p*, then another test is to use the Kolmogorov–Smirnov or Kuiper test for the random variable *p*(*X*), whose cumulative distribution function is $\tilde{P}$ in Eq. **2**. The test statistic for the original Kuiper test is
$$U = \sqrt{n}\,\max_{x \in \mathbb{R}} \bigl(\hat{P}(x) - P(x)\bigr) + \sqrt{n}\,\max_{x \in \mathbb{R}} \bigl(P(x) - \hat{P}(x)\bigr), \tag{3}$$
where $\hat{P}(x)$ is the empirical cumulative distribution function, that is, the number of *k* such that *X*_{k} ≤ *x*, divided by *n*. The test statistic for the Kuiper test for *p*(*X*) is therefore
$$V = \sqrt{n}\,\max_{x \in \mathbb{R}} \bigl(\hat{\tilde{P}}(x) - \tilde{P}(x)\bigr) + \sqrt{n}\,\max_{x \in \mathbb{R}} \bigl(\tilde{P}(x) - \hat{\tilde{P}}(x)\bigr), \tag{4}$$
where $\hat{\tilde{P}}(x)$ is the number of *k* such that *p*(*X*_{k}) ≤ *x*, divided by *n*. Remark 1 above and Remark 4 below provide some motivation for using *V*, beyond its being a natural variation on *U*.
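Both statistics are easy to compute from sorted draws. The sketch below assumes the standard order-statistics evaluation of the two one-sided Kuiper discrepancies; the callables `cdf` and `pcdf` (standing for *P* of Eq. **1** and $\tilde{P}$ of Eq. **2**) are conventions of this sketch, since the paper prescribes no particular implementation.

```python
import numpy as np

def kuiper(draws, cdf):
    """U of Eq. 3: sqrt(n) times the sum of the two one-sided discrepancies
    between the empirical CDF of the draws and the specified CDF."""
    x = np.sort(np.asarray(draws, dtype=float))
    n = x.size
    F = cdf(x)
    k = np.arange(1, n + 1)
    d_plus = np.max(k / n - F)          # empirical CDF above the specified CDF
    d_minus = np.max(F - (k - 1) / n)   # specified CDF above the empirical CDF
    return np.sqrt(n) * (d_plus + d_minus)

def kuiper_V(draws, p, pcdf):
    """V of Eq. 4: the same Kuiper discrepancy, applied to the values p(X_k)
    and the distribution function P-tilde of Eq. 2."""
    return kuiper(p(np.asarray(draws, dtype=float)), pcdf)
```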

The rationale for using statistics such as *U* and *V* is the following theorem, corollary, and the ensuing discussion (see, for example, Sections 14.3.3 and 14.3.4 of ref. 1 or refs. 2 and 3 for proofs and details).

**Theorem 1.** Suppose that *p* is a probability density function, *X* is a random variable distributed according to *p*, and *P* is the cumulative distribution function for *X* from Eq. **1**. Then, the distribution of *P*(*X*) is the uniform distribution over [0, 1].

**Corollary 2.** Suppose that *p* is a probability density function, *X* is a random variable distributed according to *p*, and $\tilde{P}$ is the cumulative distribution function for *p*(*X*) from Eq. **2**. Then, the cumulative distribution function of $\tilde{P}(p(X))$ is less than or equal to the cumulative distribution function of the uniform distribution over [0, 1]. Moreover, the distribution of $\tilde{P}(p(X))$ is the uniform distribution over [0, 1] if $\tilde{P}$ is a continuous function ($\tilde{P}$ is a continuous function when, for every nonnegative real number *y*, the probability that *p*(*X*) = *y* is 0).

Theorem 1 generalizes to the fact that, if the i.i.d. draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} arise from the probability density function *p* involved in the definition of *U* in Eq. **3**, then the distribution of *U* does not depend on *p*; the distribution of *U* is the same for any *p*. With high probability, *U* is not much greater than 1 when the i.i.d. draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} used in the definition of *U* in Eq. **3** are taken from the distribution whose probability density function *p* and cumulative distribution function *P* are used in the definition of *U*. Therefore, if the statistic *U* that we compute turns out to be substantially greater than 1, then we can have high confidence that the i.i.d. draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} were not taken from the distribution whose probability density function *p* and cumulative distribution function *P* were used in the definition of *U*. Similarly, if *V* defined in Eq. **4** turns out to be substantially greater than 1, then we can have high confidence that the i.i.d. draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} were not taken from the distribution whose probability density function *p* and distribution function were used in the definition of *V*. For details, see, for example, Sections 14.3.3 and 14.3.4 of ref. 1 or refs. 2 and 3.
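A quick simulation (not from the paper) illustrates the fact underlying this discussion: *P*(*X*) is uniform whenever *X* is drawn from the continuous cumulative distribution function *P*. The exponential distribution here is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)
# For X ~ Exp(1), the CDF is P(x) = 1 - exp(-x), so P(X) should be uniform.
x = rng.exponential(size=100_000)
u = 1.0 - np.exp(-x)
print(np.histogram(u, bins=10, range=(0.0, 1.0))[0] / u.size)  # each entry ~0.1
```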

A third test statistic is
$$W = n \min_{1 \le k \le n} \tilde{P}(p(X_k)). \tag{5}$$
The following theorem (which follows immediately from Corollary 2) and the ensuing discussion characterize *W* and its applications.

**Theorem 3.** Suppose that *p* is a probability density function, *n* is a positive integer, *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} are i.i.d. random variables each distributed according to *p*, $\tilde{P}$ is the cumulative distribution function for *p*(*X*_{1}) from Eq. **2**, and *W* is the random variable defined in Eq. **5**. Then,
$$\mathbf{P}\{W \le x\} \le 1 - \left(1 - \frac{x}{n}\right)^{n} \tag{6}$$
for any *x*∈[0,*n*].

For any positive real number *α* < 1/2, we define
$$x_{\alpha} = n\left(1 - (1 - \alpha)^{1/n}\right); \tag{7}$$
if *W* ≤ *x*_{α}, then due to Eq. **6** we can have at least [100(1 - *α*)]% confidence that the i.i.d. draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} do not arise from *p*. It follows from Eq. **7** that
$$x_{\alpha} \ge \alpha, \tag{8}$$
with *x*_{α} = *α* for *n* = 1, and $\lim_{n \to \infty} x_{\alpha} = \ln(1/(1 - \alpha))$. Therefore, if *W* ≤ *α*, then we have at least [100(1 - *α*)]% confidence that the i.i.d. draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} do not arise from *p*. Taking *α* = .01, for example, we have at least 99% confidence that the i.i.d. draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} do not arise from *p*, if *W* ≤ .01.

**Remark 2.** If *W* defined in Eq. **5** is at most 1, then we can have at least [100(1 - *W*)]% confidence that the i.i.d. draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} do not arise from the probability density function *p* used in Eq. **5**.

**Remark 3.** Using *W* defined in Eq. **5** along with the upper bound in Eq. **6** is optimal when the probability density function *p* takes on only finitely many values, or when *p* has the property that, for every nonnegative real number *y*, the probability is 0 that *p*(*X*) = *y*, where *X* is a random variable distributed according to *p*. In both cases, the inequality in Eq. **6** becomes the equality
$$\mathbf{P}\{W \le x\} = 1 - \left(1 - \frac{x}{n}\right)^{n} \tag{9}$$
for any *x* of the form $x = n\tilde{P}(y)$ with *y* ≥ 0 (hence for every *x*∈[0,*n*] when $\tilde{P}$ is continuous).
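The statistic *W* and the quantities in Eqs. **6**–**8** take only a few lines each; the sketch below follows the conventions of the earlier sketches (`pcdf` for $\tilde{P}$).

```python
import numpy as np

def w_statistic(draws, p, pcdf):
    """W of Eq. 5: n times the smallest of the values P-tilde(p(X_k))."""
    d = np.asarray(draws, dtype=float)
    return d.size * np.min(pcdf(p(d)))

def w_pvalue_bound(w, n):
    """Upper bound of Eq. 6 on P{W <= w}; it is itself at most w."""
    return 1.0 - (1.0 - w / n) ** n

def x_alpha(alpha, n):
    """Threshold of Eq. 7: rejecting when W <= x_alpha gives level alpha."""
    return n * (1.0 - (1.0 - alpha) ** (1.0 / n))
```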

**Remark 4.** When the statistic *W* defined in Eq. **5** is not powerful enough to discriminate between two particular distributions, a natural alternative is the average
$$\frac{1}{n} \sum_{k=1}^{n} \tilde{P}(p(X_k)). \tag{10}$$
The Kuiper test statistic *V* defined in Eq. **4** is a more refined version of this alternative, and we recommend using *V* instead of the average in Eq. **10**, in conjunction with the use of *W* and *U* defined in Eq. **3**. We could also consider more general averages of the form
$$f\!\left(\frac{1}{n} \sum_{k=1}^{n} g\bigl(\tilde{P}(p(X_k))\bigr)\right), \tag{11}$$
where *f* and *g* are functions; obvious candidates include *f*(*x*) = exp(*x*) and *g*(*x*) = ln(*x*), and *f*(*x*) = 1 - *x*^{1/q} and *g*(*x*) = (1 - *x*)^{q}, with *q*∈(1,∞).
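Under the reading of Eqs. **10** and **11** given above, these averages are one-liners; the defaults below give the geometric mean of the values $\tilde{P}(p(X_k))$, corresponding to *f*(*x*) = exp(*x*) and *g*(*x*) = ln(*x*).

```python
import numpy as np

def generalized_average(draws, p, pcdf, f=np.exp, g=np.log):
    """f of the mean of g(P-tilde(p(X_k))), as in Eq. 11; taking both f and g
    to be the identity recovers the plain average of Eq. 10."""
    u = pcdf(p(np.asarray(draws, dtype=float)))
    return f(np.mean(g(u)))   # caution: g = ln diverges if any value is 0
```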

## Numerical Examples

In this section, we illustrate the effectiveness of the test statistics of the present paper via several numerical experiments. For each experiment, we compute the statistics *U*, *V*, and *W* defined in Eqs. **3**, **4**, and **5** for two sets of i.i.d. draws, first for i.i.d. draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} taken from the distribution whose probability density function *p*, cumulative distribution function *P*, and distribution function $\tilde{P}$ are used in the definitions of *U*, *V*, and *W* in Eqs. **3**, **4**, and **5**, and second for i.i.d. draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} taken from a different distribution.

The test statistics *U* and *V* defined in Eqs. **3** and **4** are the same, except that *U* concerns a random variable *X* drawn from a probability density function *p*, while *V* concerns *p*(*X*). We can directly compare the values of *U* and *V* for various distributions in order to gauge their relative discriminative powers. Ideally, *U* and *V* should not be much greater than 1 when the i.i.d. draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} used in the definitions of *U* and *V* in Eqs. **3** and **4** are taken from the distribution whose probability density function *p*, cumulative distribution function *P*, and distribution function $\tilde{P}$ are used in the definitions of *U* and *V*; *U* and *V* should be substantially greater than 1 when the i.i.d. draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} are taken from a different distribution, to signal the difference between the common distribution of each of *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} and the distribution whose probability density function *p*, cumulative distribution function *P*, and distribution function $\tilde{P}$ are used in the definitions of *U* and *V*.

For details concerning the interpretation of and significance levels for the Kuiper test statistics *U* and *V* defined in Eqs. **3** and **4**, see Sections 14.3.3 and 14.3.4 of ref. 1 or refs. 2 and 3; both one- and two-tailed hypothesis tests are available, for any finite number *n* of draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n}, and also in the limit of large *n*. In short, if *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} are i.i.d. random variables drawn according to a continuous cumulative distribution function *P*, then the complementary cumulative distribution function of *U* defined in Eq. **3** for the same cumulative distribution function *P* has an upper tail that decays nearly as fast as the complementary error function. Although the details are complicated (varying with *n* and with the form—one-tailed or two-tailed—of the hypothesis test), the probability that *U* is greater than 2 is at most 1% when *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} used in Eq. **3** are drawn according to the same cumulative distribution function *P* as used in Eq. **3**.
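For orientation, the classical asymptotic tail series for Kuiper's statistic (see, for example, refs. 2 and 3) is straightforward to evaluate; this sketch omits the finite-*n* corrections that the cited references discuss.

```python
import math

def kuiper_tail(lam, terms=100):
    """Asymptotic P{U > lam} for the Kuiper statistic:
    2 * sum over j >= 1 of (4 j^2 lam^2 - 1) * exp(-2 j^2 lam^2)."""
    if lam < 0.4:
        return 1.0   # the series is not useful this far into the bulk
    s = sum((4 * j * j * lam * lam - 1.0) * math.exp(-2.0 * j * j * lam * lam)
            for j in range(1, terms + 1))
    return min(1.0, max(0.0, 2.0 * s))

print(kuiper_tail(2.0))   # ~0.01, matching the 1% figure quoted above
```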

As described in Remark 2, the interpretation of the test statistic *W* defined in Eq. **5** is simple: If *W* defined in Eq. **5** is at most 1, then we can have at least [100(1 - *W*)]% confidence that the i.i.d. draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} do not arise from the probability density function *p* used in Eq. **5**.

Tables 2–5 display numerical results for the examples described in the subsections below. The following list describes the headings of the tables:

- *n* is the number of i.i.d. draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} taken to form the statistics *U*, *V*, and *W* defined in Eqs. **3**, **4**, and **5**.
- *U*_{0} is the statistic *U* defined in Eq. **3**, with the *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} defining $\hat{P}$ in Eq. **3** drawn from a distribution with the same cumulative distribution function *P* as used in Eq. **3**. Ideally, *U*_{0} should be small, not much larger than 1.
- *U*_{1} is the statistic *U* defined in Eq. **3**, with the *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} defining $\hat{P}$ in Eq. **3** drawn from a distribution with a cumulative distribution function that is different from *P* used in Eq. **3**. Ideally, *U*_{1} should be large, substantially greater than 1, to signal the difference between the common distribution of each of *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} and the distribution with the cumulative distribution function *P* used in Eq. **3**. The numbers in parentheses in the tables indicate the order of magnitude of the significance level for rejecting the null hypothesis, that is, for asserting that the draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} do not arise from *P*.
- *V*_{0} is the statistic *V* defined in Eq. **4**, with the *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} defining $\hat{\tilde{P}}$ in Eq. **4** drawn from a distribution with the same probability density function *p* used for $\tilde{P}$ and for $\hat{\tilde{P}}$ in Eq. **4**. Ideally, *V*_{0} should be small, not much larger than 1.
- *V*_{1} is the statistic *V* defined in Eq. **4**, with the *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} defining $\hat{\tilde{P}}$ in Eq. **4** drawn from a distribution that is different from the distribution with the probability density function *p* used for $\tilde{P}$ and for $\hat{\tilde{P}}$ in Eq. **4**. Ideally, *V*_{1} should be large, substantially greater than 1, to signal the difference between the common distribution of each of *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} and the distribution with the probability density function *p* used for $\tilde{P}$ and for $\hat{\tilde{P}}$ in Eq. **4**. The numbers in parentheses in the tables indicate the order of magnitude of the significance level for rejecting the null hypothesis, that is, for asserting that the draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} do not arise from *p*. We used ref. 2 to estimate the significance level; this estimate can be conservative for *V*.
- *W*_{0} is the statistic *W* defined in Eq. **5**, with the *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} in Eq. **5** drawn from a distribution with the same probability density function *p* and distribution function $\tilde{P}$ used in Eq. **5**. Ideally, *W*_{0} should not be much less than 1.
- *W*_{1} is the statistic *W* defined in Eq. **5**, with the *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} in Eq. **5** drawn from a distribution that is different from the distribution with the probability density function *p* used in Eq. **5** (*p* is used both directly and for defining the distribution function $\tilde{P}$ in Eq. **5**). Ideally, *W*_{1} should be small, substantially less than 1, to signal the difference between the common distribution of each of *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} and the distribution with the probability density function *p* used in Eq. **5**. The value of *W*_{1} itself is the significance level for rejecting the null hypothesis, that is, for asserting that the draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} do not arise from *p*.

### A Sawtooth Wave.

The probability density function *p* for our first example is the sawtooth wave defined via Eq. **12** for any *x*∈ℝ, supported on the interval (0, 1000).

We compute the statistics *U*, *V*, and *W* defined in Eqs. **3**, **4**, and **5** for two sets of i.i.d. draws, first for i.i.d. draws distributed according to *p* defined in Eq. **12**, and then for i.i.d. draws from the uniform distribution on (0, 1000). Table 2 displays numerical results.

For this example, the classical Kuiper statistic *U* is unable to signal that the draws from the uniform distribution do not arise from *p* defined in Eq. **12** for *n* ≤ 10^{7}, at least not nearly as well as the modified Kuiper statistic *V*, which signals the discrepancy with very high confidence for *n*≥10^{3}. The statistic *W* signals the discrepancy with high confidence for *n*≥10^{3}, too.
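To reproduce the flavor of this experiment, the sketch below assumes a specific sawtooth, with unit-width teeth rising linearly across (0, 1000), since the text above does not reproduce Eq. **12** explicitly; for that assumed density, Eqs. **1** and **2** have the closed forms used below. The helpers `kuiper` and `w_statistic` are the sketches given in *Test Statistics*.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed stand-in for Eq. 12: p rises linearly from 0 to 1/500 on each unit
# interval of (0, 1000), so that p integrates to 1.
p = lambda x: np.where((x > 0) & (x < 1000), (x - np.floor(x)) / 500.0, 0.0)
P = lambda x: (np.floor(x) + (x - np.floor(x)) ** 2) / 1000.0  # Eq. 1 for this p
Pt = lambda t: np.clip(500.0 * np.asarray(t), 0.0, 1.0) ** 2   # Eq. 2 for this p

n = 10_000
from_p = rng.integers(0, 1000, n) + np.sqrt(rng.random(n))  # inverse transform
from_unif = 1000.0 * rng.random(n)                          # uniform on (0, 1000)

for label, d in (("from p   ", from_p), ("from unif", from_unif)):
    print(label, "U =", kuiper(d, P), " V =", kuiper(p(d), Pt),
          " W =", w_statistic(d, p, Pt))
```

With this stand-in density, *U* remains near 1 for both samples, while *V* and *W* flag the uniform sample emphatically, matching the qualitative behavior reported in Table 2.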

### A Step Function.

The probability density function *p* for our second example is a step function (a function that is constant on each interval in a particular partition of the real line into finitely many intervals). In particular, we define *p* via Eq. **13** for any *x*∈ℝ.

We compute the statistics *U*, *V*, and *W* defined in Eqs. **3**, **4**, and **5** for two sets of i.i.d. draws, first for i.i.d. draws distributed according to *p* defined in Eq. **13**, and then for i.i.d. draws from the uniform distribution on (0, 1999). Table 3 displays numerical results.

For this example, the classical Kuiper statistic *U* is unable to signal that the draws from the uniform distribution do not arise from *p* defined in Eq. **13** for *n* ≤ 10^{6}, at least not nearly as well as the modified Kuiper statistic *V*, which signals the discrepancy with high confidence for *n*≥10^{2}. The statistic *W* does not signal the discrepancy for this example.

### A Bimodal Distribution.

The probability density function *p* for our third example, a “tent” with a narrow triangle removed at its apex, is defined via Eq. **14** for any *x*∈ℝ. Fig. 1 plots *p*.

We compute the statistics *U*, *V*, and *W* defined in Eqs. **3**, **4**, and **5** for two sets of i.i.d. draws, first for i.i.d. draws distributed according to *p* defined in Eq. **14**, and then for i.i.d. draws distributed according to the probability density function *q* defined via Eq. **15** for any *x*∈ℝ (nearly the same tent, but with the narrow triangle intact). Fig. 2 plots *q*. Table 4 displays numerical results.

For this example, the classical Kuiper statistic *U* signals that the draws from *q* defined in Eq. **15** do not arise from *p* defined in Eq. **14** for *n*≥10^{5}, and the modified Kuiper statistic *V* is inferior. The statistic *W* signals the discrepancy with high confidence for *n*≥10^{4}.

### A Differentiable Density Function.

The probability density function *p* for our fourth example is defined via Eq. **16** for any *x*∈ℝ, where *C* ≈ .4 is the positive real number chosen such that the integral of *p* over ℝ is 1. Fig. 3 plots *p*. We evaluated numerically the corresponding cumulative distribution function *P* defined in Eq. **1**, using the Chebfun package for Matlab described in ref. 16. Fig. 4 plots *P*. We evaluated the distribution function $\tilde{P}$ defined in Eq. **2** using the general-purpose scheme described in the appendix of ref. 17 (which is also based on Chebfun). Fig. 5 plots $\tilde{P}$.
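When $\tilde{P}$ has no convenient closed form, as in this example, a simple quadrature scheme can stand in for the Chebfun-based black-box scheme of ref. 17: sort grid values of *p* and accumulate probability mass in order of increasing *p*. The sketch below is a deliberately crude stand-in, adequate only for smooth densities that are negligible outside a known interval.

```python
import numpy as np

def pcdf_on_grid(p, lo, hi, m=200_001):
    """Return a callable approximating P-tilde of Eq. 2 for a density p that
    is negligible outside (lo, hi)."""
    y = np.linspace(lo, hi, m)
    w = np.full(m, (hi - lo) / (m - 1))
    w[0] *= 0.5
    w[-1] *= 0.5                              # trapezoidal quadrature weights
    py = p(y)
    order = np.argsort(py)
    thresholds = py[order]                    # candidate values t of p
    mass = np.cumsum(py[order] * w[order])    # integral of p over {y: p(y) <= t}
    def pt(t):
        i = np.searchsorted(thresholds, np.asarray(t, dtype=float), side="right")
        return np.where(i > 0, mass[np.clip(i - 1, 0, m - 1)], 0.0)
    return pt
```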

We compute the statistics *U*, *V*, and *W* defined in Eqs. **3**, **4**, and **5** for two sets of i.i.d. draws, first for i.i.d. draws distributed according to *p* defined in Eq. **16**, and then for i.i.d. draws distributed according to the probability density function *q* defined via Eq. **17** for any *x*∈ℝ. Table 5 displays numerical results.

For this example, the classical Kuiper statistic *U* signals that the draws from *q* defined in Eq. **17** do not arise from *p* defined in Eq. **16** for *n*≥10^{4}, but not nearly as well as the modified Kuiper statistic *V*, which signals the discrepancy with high confidence for *n*≥10^{2}. The statistic *W* signals the discrepancy with high confidence for *n*≥10^{2}, too.

For all but the last example, the cumulative distribution function *P* defined in Eq. **1** and the distribution function $\tilde{P}$ defined in Eq. **2** are easy to calculate analytically; see, for example, ref. 17. However, as the last example illustrates, evaluating *P* and $\tilde{P}$ can in general require numerical algorithms such as the black-box schemes described in the appendix of ref. 17.

For all numerical examples reported above, at least one of the modified Kuiper statistic *V* or the statistic *W* is more powerful than the classical Kuiper statistic *U*, usually strikingly so. However, we recommend using all three statistics in conjunction, to be conservative. In fact, the statistics *V* and *W* of the present article are not able to discern certain characteristics of probability distributions that *U* can detect, such as the symmetry of a Gaussian. The classical Kuiper statistic *U* should be more powerful than its modification *V* for any differentiable probability density function that has only one local maximum. For a differentiable probability density function that has only one local maximum, the statistic *W* amounts to an obvious test for outliers, nothing new (and far more subtle procedures for identifying outliers are available; see, for example, refs. 18 and 19). Still, as the above examples illustrate, *V* and *W* can be helpful with probability density functions that have multiple local maxima.

## Conclusions and Generalizations

In this paper, we complemented the classical tests of the Kolmogorov–Smirnov type with tests based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability). *Numerical Examples* above illustrates the substantial power of the supplementary tests, relative to the classical tests.

Needless to say, the method of the present paper generalizes straightforwardly to probability density functions of several variables. There are also generalizations to discrete distributions, whose cumulative distribution functions are discontinuous.

If the probability density function *p* involved in the definition of the modified Kuiper test statistic *V* in Eq. **4** takes on only finitely many values, then the confidence bounds of refs. 2 and 3 and Sections 14.3.3 and 14.3.4 of ref. 1 are conservative, yielding lower than possible confidence levels that i.i.d. draws *X*_{1},*X*_{2},…,*X*_{n-1},*X*_{n} do not arise from *p*. It is probably feasible to compute the tightest possible confidence levels (maybe without resorting to the obvious Monte Carlo method), though we may want to replace *V* with a better statistic when *p* takes on only finitely many values; for example, when *p* takes on only finitely many values, we can literally and explicitly rearrange *p* to be nondecreasing on the shortest interval outside which it vanishes, and use the Kolmogorov–Smirnov approach on the rearranged *p*.
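For a density taking only finitely many values, the explicit rearrangement mentioned here amounts to a sort. Below is a minimal sketch, assuming the step density is represented as constant values on pieces of given lengths (that representation is a convention of the sketch, not of the paper).

```python
import numpy as np

def nondecreasing_rearrangement(values, lengths):
    """Rearrange a step density, given as constant `values` on pieces of the
    given `lengths`, into the nondecreasing step density obtained by laying
    the pieces end to end in order of increasing value."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    L = np.asarray(lengths, dtype=float)[order]
    edges = np.concatenate(([0.0], np.cumsum(L)))
    return edges, v   # rearranged density equals v[i] on (edges[i], edges[i+1])
```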

Even so, the confidence bounds of refs. 2 and 3 and Sections 14.3.3 and 14.3.4 of ref. 1 for the modified Kuiper test statistic *V* in Eq. **4** are sharp for many probability density functions *p*. For example, the bounds are sharp if, for every nonnegative real number *y*, the probability is 0 that *p*(*X*) = *y*, where *X* is a random variable distributed according to *p*. This covers most cases of practical interest. In general, the tests of the present article are fully usable in their current forms, but may not yet be perfectly optimal for certain classes of probability distributions.

## Acknowledgments

We thank Andrew Barron, Gérard Ben Arous, Peter Bickel, Sourav Chatterjee, Leslie Greengard, Peter W. Jones, Ann B. Lee, Vladimir Rokhlin, Jeffrey Simonoff, Amit Singer, Saul Teukolsky, Larry Wasserman, and Douglas A. Wolfe.

## Footnotes

^{1}E-mail: tygert{at}courant.nyu.edu.

Author contributions: M.T. performed research; and M.T. wrote the paper.

The author declares no conflict of interest.

## References

- [1] Press W, Teukolsky S, Vetterling W, Flannery B (2007) *Numerical Recipes: The Art of Scientific Computing* (Cambridge Univ Press, Cambridge, UK), 3rd Ed.
- [2] Stephens MA.
- [3] Stephens MA (1965) The goodness-of-fit statistic *V*_{n}: Distribution and significance points. *Biometrika* 52:309–321.
- [7] Hollander M, Wolfe DA.
- [10] Rayner JCW, Thas O, Best DJ.
- [11] Reschenhofer E.
- [12] Simonoff JS.
- [13] Wasserman L.
- [15] Stein EM, Weiss G (1971) *Introduction to Fourier Analysis on Euclidean Spaces* (Princeton Univ Press, Princeton, NJ).
- [16] Trefethen LN, Hale N, Platte RB, Driscoll TA, Pachón R (2009) *Chebfun Version 3* (Oxford Univ, Oxford, UK).
- [17] Tygert M.