Solution to Exercise 1.22:

DrScheme does not have a built-in `runtime` procedure that I could find, so I modified the code for `timed-prime-test` to use SRFI 19, like so:

```
(require (lib "19.ss" "srfi"))

(define (timed-prime-test n)
  (newline)
  (display n)
  (start-prime-test n (current-time time-process)))

(define (start-prime-test n start-time)
  (if (prime? n)
      (report-prime
       (time-difference (current-time time-process)
                        start-time))))

(define (report-prime elapsed-time)
  (display " *** ")
  (display (time-second elapsed-time))
  (display "s ")
  (display (time-nanosecond elapsed-time))
  (display "ns"))
```

I implemented `search-for-primes` as follows:

```
(define (search-for-next-prime starting-at)
  ; Assumes starting-at is odd, so we can step by 2.
  (if (prime? starting-at)
      starting-at
      (search-for-next-prime (+ starting-at 2))))

(define (search-for-primes find-n starting-at)
  (if (= find-n 0)
      null
      (let ((next-prime (search-for-next-prime starting-at)))
        (cons next-prime
              (search-for-primes (- find-n 1) (+ next-prime 2))))))
```

I could then run some tests:

```
(define (time-prime-tests primes)
  (map timed-prime-test primes))

(time-prime-tests (search-for-primes 3 1001))
(time-prime-tests (search-for-primes 3 10001))
(time-prime-tests (search-for-primes 3 100001))
(time-prime-tests (search-for-primes 3 1000001))
```

This gave the following output:

```
1009 *** 0s 0ns
1013 *** 0s 0ns
1019 *** 0s 0ns
(# # #)
10007 *** 0s 0ns
10009 *** 0s 0ns
10037 *** 0s 0ns
(# # #)
100003 *** 0s 0ns
100019 *** 0s 0ns
100043 *** 0s 0ns
(# # #)
1000003 *** 0s 0ns
1000033 *** 0s 0ns
1000037 *** 0s 0ns
(# # #)
```

As you can see, every primality test reportedly took no time at all. This is obviously wrong.

I have a feeling that all these processes complete in less time than a single tick of the process clock (which, on Windows, appears to have a resolution in milliseconds).
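If you want to check that suspicion on your own platform, Python (used here purely for illustration, since DrScheme doesn't expose this directly) can report the declared resolution of its process-time clock:

```python
import time

# time.get_clock_info reports the implementation and declared resolution
# of a named clock; a coarse resolution here would explain why fast
# primality tests all measure as zero elapsed time.
info = time.get_clock_info("process_time")
print(f"implementation: {info.implementation}")
print(f"resolution: {info.resolution * 1e9:.0f} ns")
```

On Unix-like systems this is typically nanoseconds; on Windows the process-time clock has historically been much coarser.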

Let’s magnify the results by calling `prime?` in `timed-prime-test` 1000 times instead of just once. Here’s the modified definition:

```
(define (timed-prime-test n)
  (newline)
  (display n)
  (start-prime-test 1000 n (current-time time-process)))

(define (start-prime-test i n start-time)
  (if (prime? n)
      (if (= i 0)
          (report-prime
           (time-difference (current-time time-process)
                            start-time))
          (start-prime-test (- i 1) n start-time))))
```

Now I get the following output:

```
1009 *** 0s 460000ns
1013 *** 0s 470000ns
1019 *** 0s 320000ns
(# # #)
10007 *** 0s 1250000ns
10009 *** 0s 1410000ns
10037 *** 0s 1090000ns
(# # #)
100003 *** 0s 5940000ns
100019 *** 0s 3910000ns
100043 *** 0s 4060000ns
(# # #)
1000003 *** 0s 7500000ns
1000033 *** 0s 1870000ns
1000037 *** 0s 2340000ns
(# # #)
```

Alright! We have some numbers. Unfortunately, when I run the tests again, I get totally different numbers. Grr! I’m going to consider these numbers representative of many runs, but they probably aren’t. Let’s start with some analysis.

Let’s start by averaging the times for each group. We can multiply this by √10 to obtain an expected time for the following group, which we can compare against the measured time.

| Group | Average Time (ns) | Expected Time (√10 × previous average) | Difference |
|---|---|---|---|
| ~1000 | 416667 | — | — |
| ~10000 | 1250000 | 1317616 | -5% |
| ~100000 | 4636667 | 3952847 | +17% |
| ~1000000 | 3903333 | 14662427 | -73% |
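The arithmetic behind this table can be reproduced with a short script (Python here rather than the post's Scheme, purely as an illustration; the timing data is copied from the output above):

```python
import math

# Measured per-prime times in nanoseconds, from the magnified runs above.
groups = {
    1000: [460000, 470000, 320000],
    10000: [1250000, 1410000, 1090000],
    100000: [5940000, 3910000, 4060000],
    1000000: [7500000, 1870000, 2340000],
}

prev_avg = None
for n, times in groups.items():
    avg = sum(times) / len(times)
    if prev_avg is None:
        print(f"~{n}: average {avg:.0f} ns")
    else:
        # If primality testing grows as sqrt(n), a 10x larger n should
        # take sqrt(10) times as long as the previous group's average.
        expected = prev_avg * math.sqrt(10)
        diff = (avg - expected) / expected * 100
        print(f"~{n}: average {avg:.0f} ns, "
              f"expected {expected:.0f} ns, diff {diff:+.0f}%")
    prev_avg = avg
```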

With a difference of over 70% on the final group, this data clearly does not support the √*n* hypothesis, but it doesn’t disprove it, either. For a definitive answer, we need a larger sample group. We need data for more values of *n* and for several runs. I’m not going to do that here because this post is already too long, and I’m sure you have better things to do.