Is Moralism an Evolutionary Maladaptation?

Johnnie Moore has been reading Brad Blanton’s Radical Honesty, and applying some of Blanton’s ideas about moralizing to branding, marketing, and improvisation. As usual for Johnnie’s posts, it is well worth a read.

What caught my attention was a quote from Blanton on what he calls the Disease of Moralism:

The passing on of learning from one generation to the next is not a bad design, and as an evolutionary development it seems to have triumphed… The ability to act based on accumulated information, and to pass great quantities of new information on, is the primary survival characteristic of the strongest animal on earth.

But, paradoxically, our survival mechanism has proved to be ultimately suicidal.

So is moralism an evolutionary maladaptation? It is an interesting hypothesis, and for all I know, could very well be true, but it seems suspiciously oversimplified to me. [I could also very likely be misrepresenting Blanton’s ideas, as I haven’t actually read the quote in context, so take this all with a grain… no, a bag of salt.]

Given how pervasive moralizing is in western culture, I would guess that it probably has some evolutionary advantage, much like what scientists are uncovering with altruism. In fact, I’d guess that moralizing is an enormously beneficial tool; so beneficial that it is often misapplied in situations where it doesn’t work, leading some observers, like Blanton, to believe it is always harmful.

So how could we test the hypothesis of moralism as a maladaptation? That’s something to think about.

Solution to SICP Exercise 1.22

Structure and Interpretation of Computer Programs

Solution to Exercise 1.22:

DrScheme does not have a built-in runtime procedure that I could find, so I modified the code for timed-prime-test to use SRFI 19, like so:

(require (lib "19.ss" "srfi"))

(define (timed-prime-test n)
  (newline)
  (display n)
  (start-prime-test n (current-time time-process)))

(define (start-prime-test n start-time)
  (if (prime? n)
      (report-prime
       (time-difference (current-time time-process)
                        start-time))))

(define (report-prime elapsed-time)
  (display " *** ")
  (display (time-second elapsed-time))
  (display "s ")
  (display (time-nanosecond elapsed-time))
  (display "ns"))

I implemented search-for-primes as follows:

;; Assumes starting-at is odd, since it only steps by 2.
(define (search-for-next-prime starting-at)
  (if (prime? starting-at)
      starting-at
      (search-for-next-prime (+ starting-at 2))))

(define (search-for-primes find-n starting-at)
  (if (= find-n 0)
      null
      (let ((next-prime (search-for-next-prime starting-at)))
        (cons next-prime
              (search-for-primes (- find-n 1) (+ next-prime 2))))))

I could then run some tests:

(define (time-prime-tests primes)
  (map timed-prime-test primes))

(time-prime-tests (search-for-primes 3 1001))
(time-prime-tests (search-for-primes 3 10001))
(time-prime-tests (search-for-primes 3 100001))
(time-prime-tests (search-for-primes 3 1000001))

This gave the following output:


1009 *** 0s 0ns
1013 *** 0s 0ns
1019 *** 0s 0ns
(# # #)

10007 *** 0s 0ns
10009 *** 0s 0ns
10037 *** 0s 0ns
(# # #)

100003 *** 0s 0ns
100019 *** 0s 0ns
100043 *** 0s 0ns
(# # #)

1000003 *** 0s 0ns
1000033 *** 0s 0ns
1000037 *** 0s 0ns
(# # #)

As you can see, all of the primality tests took no time at all. This is obviously wrong.

I have a feeling that all these processes complete in less time than a single tick of the process clock (which, on Windows, appears to have a resolution in milliseconds).
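One rough way to check that hunch, reusing the SRFI 19 setup above (clock-resolution is just a name I’m making up here, not anything provided by SRFI 19): busy-loop until the reported process time changes and see how big the jump is.

(define (clock-resolution)
  ;; Sample the process clock until its value changes, then return the
  ;; size of the jump. The busy loop burns CPU, so the clock must
  ;; eventually tick over.
  (let ((start (current-time time-process)))
    (let loop ((now (current-time time-process)))
      (if (time=? now start)
          (loop (current-time time-process))
          (time-difference now start)))))

(time-nanosecond (clock-resolution))

If that prints something in the millisecond range (millions of nanoseconds), the zeros above make sense: each individual call to prime? finishes well inside a single tick.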

Let’s magnify the results by calling prime? in timed-prime-test 1000 times instead of just once. Here’s the modified definition:

(define (timed-prime-test n)
  (newline)
  (display n)
  (start-prime-test 1000 n (current-time time-process)))

(define (start-prime-test i n start-time)
  (if (prime? n)
      (if (= i 0)
          (report-prime
           (time-difference (current-time time-process)
                            start-time))
          (start-prime-test (- i 1) n start-time))))

Now I get the following output:


1009 *** 0s 460000ns
1013 *** 0s 470000ns
1019 *** 0s 320000ns
(# # #)

10007 *** 0s 1250000ns
10009 *** 0s 1410000ns
10037 *** 0s 1090000ns
(# # #)

100003 *** 0s 5940000ns
100019 *** 0s 3910000ns
100043 *** 0s 4060000ns
(# # #)

1000003 *** 0s 7500000ns
1000033 *** 0s 1870000ns
1000037 *** 0s 2340000ns
(# # #)

Alright! We have some numbers. Unfortunately, when I run the tests again, I get totally different numbers. Grr! I’m going to consider these numbers representative of many runs, but they probably aren’t. Let’s start with some analysis.

First, average the times for each group. Since prime? only tests divisors up to √n, the running time should grow as √n, so multiplying n by 10 should multiply the time by √10. Multiplying each group’s average by √10 therefore gives an expected time for the following group, which we can compare against the measured time.

Group       Average time (ns)   Expected time (ns, based on previous group)   Difference
~1000             416667                        n/a                              n/a
~10000           1250000                    1317616                              -5%
~100000          4636667                    3952847                              17%
~1000000         3903333                   14662427                             -73%
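For what it’s worth, this is roughly how I computed the table; average and expected-next are throwaway helpers, and the timings are pasted in by hand from the output above.

(define (average times)
  ;; Mean of a group's timings, in nanoseconds.
  (exact->inexact (/ (apply + times) (length times))))

(define (expected-next group-average)
  ;; The next group's n is 10 times larger, so we expect sqrt(10) times
  ;; the running time.
  (* group-average (sqrt 10)))

(average '(460000 470000 320000))                  ; ~1000 group: about 416667
(expected-next (average '(460000 470000 320000)))  ; about 1317616, vs. 1250000 measured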

With a difference of over 70% on the final group, this data clearly does not support the √n hypothesis, but it doesn’t disprove it, either. For a definitive answer we need a larger sample: data for more values of n, averaged over several runs. I’m not going to do that here because this post is already too long, and I’m sure you have better things to do.
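If you do feel like grinding through it, collecting several runs per group might look something like the following sketch; run-trials is just a throwaway name.

(define (run-trials runs primes)
  ;; Repeat the timed tests over the same list of primes several times,
  ;; printing a fresh set of timings on each pass.
  (if (> runs 0)
      (begin
        (time-prime-tests primes)
        (run-trials (- runs 1) primes))))

(run-trials 5 (search-for-primes 3 1001))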

Pakistan Trains Female Fighter Pilots

Boing Boing is linking to a BBC story that Pakistan is now training women to be fighter pilots.

In the human factors course that I took as an undergrad years ago, the professor consulted with the military on several designs. He told us of the stringent anthropometric (body-shape) requirements for fighter pilots. For example, a pilot’s arms and legs must be a certain length because the seats cannot be made adjustable given the G-forces expected in flight. Their torsos cannot be too tall or the canopy window won’t close, nor too short or the pilot won’t be able to see over the control panel.

There is also a weight restriction. The ejection seats are designed for a very narrow range of weights (if I remember correctly, something like 175-190 lbs). If the pilot is any heavier, there is a risk the ejected seat will not clear the aircraft. If the pilot is too light, there is a risk that ejection will cause severe spinal injury and/or death.

Given that women are, on average, lighter than men, I wonder if these women cadets fit the anthropometric requirements for the aircraft that they will be piloting.

Network Theory Study of the U.S. House of Representatives

A recent paper by Porter, Mucha, Newman, and Warmbrand studies the interconnections of the committees and sub-committees in the U.S. Congress from a network-theoretic perspective. They conclude that the distribution of congressmen among the committees is not random (there’s a shocker) and that cliques rule over certain parts of the government. For example, there are strong ties between the Select Committee on Homeland Security and the House Rules Committee.

Regardless of the political implications of the study, I found it to be a fun example of network theory.

For those less inclined to read the paper, New Scientist has a summary.

Proving Theorem Provers Correct

In a Los Angeles Times commentary, Margaret Wertheim writes about one of the growing problems in modern mathematics:

People are now claiming proofs for two of the most famous problems in mathematics — the Riemann Hypothesis and the Poincare Conjecture — yet it is far from easy to tell whether either claim is valid. In the first case the purported proof is so long and the mathematics so obscure no one wants to spend the time checking through its hundreds of pages for fear they may be wasting their time. In the second case, a small army of experts has spent the last two years poring over the equations and still doesn’t know whether they add up.

She claims that mathematics is evolving into a postmodern discipline that, according to Philip Davis, emeritus professor of mathematics at Brown University, is “a multi-semiotic enterprise” prone to ambiguity and definitional drift.

What I found interesting was this bit:

The [four-colour map] problem was first stated in 1853 and over the years a number of proofs have been given, all of which turned out to be wrong. In 1976, two mathematicians programmed a computer to exhaustively examine all the possible cases, determining that each case does indeed hold. Many mathematicians, however, have refused to accept this solution because it cannot be verified by hand. In 1996, another group came up with a different (more checkable) computer-assisted proof, and in December this new proof was verified by yet another program. Still, there are skeptics who hanker after a fully human proof.

I don’t know if I buy the postmodern angle, but I believe she’s right about the practice of mathematics changing. It seems to me that as the complexity of mathematical problems grows beyond the abilities of human minds, we are going to require computers to prove our theorems for us. My prediction: the job of mathematicians will eventually turn into proving that theorem-proving software is correct.