Tests for Random Numbers
Testing for uniformity:
Failure to reject the null hypothesis,
H0, means that
non-uniformity has not been detected.
Testing for independence:
Failure to reject the null hypothesis, H0, means that evidence of
dependence has not been detected.
Level of significance α: the probability of rejecting H0 when H0 is true,
α = P(reject H0 | H0 is true)
When to use these tests:
If a well-known simulation language or random-number
generator is used, it is probably unnecessary to test the output
If the generator is not explicitly known or documented, e.g.,
spreadsheet programs, symbolic/numerical calculators, tests
should be applied to many sample numbers.
Types of tests:
Theoretical tests: evaluate the choices of m, a, and c without
actually generating any numbers
Empirical tests: applied to actual sequences of numbers
produced. Our emphasis.
Frequency Tests [Tests for RN]
Test of uniformity
Two different methods:
Kolmogorov-Smirnov Test [Frequency Test]
Compares the continuous cdf, F(x), of the uniform
distribution with the empirical cdf, SN(x), of the N sample observations
We know: F(x) = x, 0 ≤ x ≤ 1
If the sample from the RN generator is R1, R2, …, RN, then the
empirical cdf, SN(x), is:
SN(x) = (number of R1, R2, …, RN that are ≤ x) / N
Based on the statistic: D = max |F(x) − SN(x)|
Sampling distribution of D is known (a function of N, tabulated in standard references)
A more powerful test, recommended.
Example: Suppose 5 generated numbers are 0.44, 0.81, 0.14, …
For α = 0.05, the critical value D0.05 = 0.565 > D
Hence, H0 is not rejected.
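The KS computation can be sketched in Python as follows; the sample below is hypothetical (the slides' own example is truncated), chosen only to illustrate the steps:

```python
def ks_statistic(sample):
    """D = max(D+, D-) comparing the empirical cdf SN(x) with F(x) = x."""
    n = len(sample)
    r = sorted(sample)
    # D+ = max_i (i/N - R_(i)),  D- = max_i (R_(i) - (i-1)/N)
    d_plus = max((i + 1) / n - x for i, x in enumerate(r))
    d_minus = max(x - i / n for i, x in enumerate(r))
    return max(d_plus, d_minus)

sample = [0.15, 0.94, 0.05, 0.51, 0.29]  # hypothetical sample, N = 5
d = ks_statistic(sample)
print(d <= 0.565)  # 0.565 is the tabulated critical value for N = 5, alpha = 0.05
```

Since D ≤ D0.05 here, H0 (uniformity) would not be rejected for this sample.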
Chi-square test [Frequency Test]
Chi-square test uses the sample statistic:
χ0² = Σ i=1..n (Oi − Ei)² / Ei
where Oi is the observed number in the i-th class and Ei is the expected number in the i-th class
χ0² approximately follows the chi-square distribution with n − 1 degrees of
freedom (where the critical values are tabulated in Table A.6)
For the uniform distribution, Ei, the expected number in each
class, is: Ei = N/n, where N is the total # of observations and n the number of classes
Valid only for large samples, e.g. N >= 50
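As a sketch, the statistic can be computed as below; the class counts are hypothetical, assuming n = 10 equal-width classes on [0, 1]:

```python
def chi_square_statistic(observed):
    """chi0^2 = sum_i (O_i - E_i)^2 / E_i, with E_i = N / n under uniformity."""
    n = len(observed)   # number of classes
    N = sum(observed)   # total number of observations
    expected = N / n    # E_i is the same for every class for U(0, 1)
    return sum((o - expected) ** 2 / expected for o in observed)

observed = [8, 12, 9, 11, 10, 10, 12, 8, 11, 9]  # hypothetical counts, N = 100
chi0_sq = chi_square_statistic(observed)
print(chi0_sq)  # compare with the Table A.6 critical value for n - 1 = 9 d.o.f.
```

H0 is rejected only when χ0² exceeds the tabulated critical value for n − 1 degrees of freedom.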
Tests for Autocorrelation [Tests for RN]
Testing the autocorrelation between every m numbers
(m is a.k.a. the lag)
The autocorrelation ρim between the numbers Ri, Ri+m, Ri+2m, …, Ri+(M+1)m, where
M is the largest integer such that i + (M + 1)m ≤ N
H0: ρim = 0, if numbers are independent
H1: ρim ≠ 0, if numbers are dependent
If the values are uncorrelated:
For large values of M, the distribution of the estimator of ρim,
denoted ρ̂im, is approximately normal.
Test statistic is:
Z0 = ρ̂im / σρ̂im, where σρ̂im = √(13M + 7) / (12(M + 1))
Z0 is distributed normally with mean = 0 and variance = 1
If ρim > 0, the subsequence has positive autocorrelation
High random numbers tend to be followed by high ones, and vice versa.
If ρim < 0, the subsequence has negative autocorrelation
Low random numbers tend to be followed by high ones, and vice versa.
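The whole test can be sketched as below; indices follow the slides' 1-based convention, and the seeded test sequence in the usage lines is a hypothetical stand-in for generator output:

```python
import math
import random

def autocorrelation_z0(numbers, i, m):
    """Z0 statistic for the lag-m autocorrelation of R_i, R_{i+m}, R_{i+2m}, ..."""
    N = len(numbers)
    M = (N - i) // m - 1                     # largest M with i + (M+1)m <= N
    # rho_hat = (1/(M+1)) * sum_{k=0}^{M} R_{i+km} * R_{i+(k+1)m}  -  0.25
    s = sum(numbers[i - 1 + k * m] * numbers[i - 1 + (k + 1) * m]
            for k in range(M + 1))           # 1-based R indices -> 0-based list
    rho_hat = s / (M + 1) - 0.25
    sigma = math.sqrt(13 * M + 7) / (12 * (M + 1))
    return rho_hat / sigma

random.seed(1)                               # hypothetical test sequence
nums = [random.random() for _ in range(100)]
z0 = autocorrelation_z0(nums, i=3, m=5)
print(abs(z0) <= 1.96)  # fail to reject H0 at alpha = 0.05 when |Z0| <= z_{0.025}
```

Note that a constant sequence yields ρ̂im well below 0, so Z0 comes out strongly negative, matching the interpretation above.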
Shortcomings [Test for Autocorrelation]
The test is not very sensitive for small values of M,
particularly when the numbers being tested are on the low side
Problem when “fishing” for autocorrelation by performing numerous tests:
If α = 0.05, there is a probability of 0.05 of rejecting a true hypothesis.
If 10 independent sequences are examined,
The probability of finding no significant autocorrelation, by
chance alone, is 0.95^10 ≈ 0.60.
Hence, the probability of detecting significant autocorrelation
when it does not exist is 1 − 0.60 = 40%
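The “fishing” arithmetic above is a one-liner:

```python
alpha = 0.05
tests = 10
p_all_pass = (1 - alpha) ** tests  # P(no false rejection in 10 independent tests)
print(round(p_all_pass, 2))        # about 0.60, so P(at least one false alarm) is about 0.40
```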