“While it is easy to lie with statistics, it is even easier to lie without them.”

FREDERICK MOSTELLER

Before I get into any in-depth analysis of the figures, I figured I needed to say a few things about numbers and, more importantly, statistics^{1}. Numbers can be entirely counterintuitive in surprising ways. For example, most of us can’t understand how one infinity can be larger than another infinity, but Georg Cantor proved that this is the case with different sets of transfinite numbers^{2}.

**Innumeracy**

We are generally terrible at reasoning with numbers–especially big numbers. Mathematician John Allen Paulos embraced the term *innumeracy* as an analog to how we use *illiteracy* to describe the lack of ability to read or write, and he describes the phenomenon in his book, Innumeracy: Mathematical Illiteracy and Its Consequences. His work describes mundane misunderstandings of numbers that seem to be systemic in contemporary culture. As the Amazon.com description of the book states:

Why do even well-educated people understand so little about mathematics? And what are the costs of our innumeracy? John Allen Paulos, in his celebrated bestseller first published in 1988, argues that our inability to deal rationally with very large numbers and the probabilities associated with them results in misinformed governmental policies, confused personal decisions, and an increased susceptibility to pseudoscience of all kinds. Innumeracy lets us know what we’re missing, and how we can do something about it.

Since we’re using statistics to describe the *Aging of Arts Audiences* and the *Decline of Arts Audiences*, it might be good to keep claims about the probabilities associated with them in perspective so that these don’t result in “misinformed arts policies,” “confused personal decisions” (e.g. fallacious usage of the data), or “increased susceptibility to pseudoscience” (e.g. the Classical Music is Dying Hypothesis).

**Foxes and Hedgehogs**

We are generally terrible at making predictions–especially long term predictions. Psychologist Philip Tetlock did a longitudinal study of predictions made by experts in many fields, including politics and economics. Over a period of 20 years, 284 experts made 28,000 predictions (both in their fields and outside of them) about the future. The results, summarized in his book, Expert Political Judgment: How Good Is It? How Can We Know?, were that experts were little better than chance at predicting the future and generally worse than computer-based algorithms. Another finding was that there was a consistent difference between the experts who were most correct in their predictions and those who were least correct. The Foxes and Hedgehogs idea first discussed by Isaiah Berlin became the perfect metaphor to describe these different kinds of experts:

Tetlock first discusses arguments about whether the world is too complex for people to find the tools to understand political phenomena, let alone predict the future. He evaluates predictions from experts in different fields, comparing them to predictions by well-informed laity or those based on simple extrapolation from current trends. He goes on to analyze which styles of thinking are more successful in forecasting. Classifying thinking styles using Isaiah Berlin’s prototypes of the fox and the hedgehog, Tetlock contends that the fox–the thinker who knows many little things, draws from an eclectic array of traditions, and is better able to improvise in response to changing events–is more successful in predicting the future than the hedgehog, who knows one big thing, toils devotedly within one tradition, and imposes formulaic solutions on ill-defined problems. He notes a perversely inverse relationship between the best scientific indicators of good judgement and the qualities that the media most prizes in pundits–the single-minded determination required to prevail in ideological combat.

To put this in the perspective of the Arts Audience debate, we have two different viewpoints (the “one big thing”) that are used to inform the interpretation of data and statistics in the formulation of predictions. The two are sometimes facetiously referred to as the “Chicken Little Think Tank” camp and the “Pollyanna” camp. The former obviously believes that the field is dying a slow death (or some variation thereof) while the latter are in denial of any decline (of audiences, of the sustainability of Orchestras, etc.). Given Tetlock’s research, the predictions and pronouncements of members of these two camps (Hedgehogs) are likely to be incorrect in most of their predictions about the field. It’s the more cautious Foxes who are more likely to be correct in their predictions, given that they tend to “know many little things,” draw from an eclectic array of traditions, and are better able to improvise in response to changing events.

Sadly, as the quote from the Princeton Press website for the book states: “He notes a perversely inverse relationship between the best scientific indicators of good judgement and the qualities that the media most prizes in pundits–the single-minded determination required to prevail in ideological combat.” Being cautious, and more likely correct, just isn’t particularly interesting to mass media audiences–it is far more interesting to read and hear about the Orchestras that are failing than about the ones which are holding their own, after all.

**Median or Mean**

The arithmetic mean is what is more commonly referred to as the average. We mostly understand this concept: take three numbers (1, 3, 2); find their sum (6); then divide by the count of numbers (3) and we have the mean (2). The median is the number which lies exactly in the middle of a sequence of numbers such that half of the numbers fall below it and half fall above it. Take the same sequence of numbers (1, 3, 2), order them from smallest to largest (1, 2, 3), and the number in the middle is the median (2). In the event you have an even number of values (say 1, 2, 3, 3) then you take the middle two values (2, 3) and find their arithmetic mean (2+3=5; 5/2=2.5). Simple, right?
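The definitions above can be checked with a few lines of Python using the standard library’s statistics module (a minimal sketch, not from the original post):

```python
from statistics import mean, median

# Mean: sum the values, then divide by the count.
assert mean([1, 3, 2]) == 2          # (1 + 3 + 2) / 3 = 2
# Median: the middle value of the sorted sequence.
assert median([1, 3, 2]) == 2        # sorted -> [1, 2, 3]; middle value is 2
# Even count: take the mean of the two middle values.
assert median([1, 2, 3, 3]) == 2.5   # middle two are (2, 3); (2 + 3) / 2 = 2.5
```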

Now, take a look at this graph of median ages of Orchestra Audiences from 1937 (one of the earliest studies mentioning Orchestra Audience median age–this one for the Grand Rapids Symphony) to 1982 (the first year of the Survey of Public Participation in the Arts; i.e. SPPA)^{3}.

We see that the median age is rising, right? Let me illustrate how misleading this rising median age can be by supplying arbitrary values used to calculate the median age:

1937 (25, 26, 28, 99) – median age=27; mean age=44.5

1955 (20, 34, 36, 87) – median age=35; mean age=44.25

1964 (17, 38, 40, 81) – median age=39; mean age=44

1982 (16, 39, 41, 79) – median age=40; mean age=43.75

As you can see, the median age can be rising while the sum of the ages and the mean age are actually falling. The *rising median age of orchestra audiences* means just that–the *rising median age of orchestra audiences*. It doesn’t mean the audience’s age is rising in every sense, since the example above shows how the sum of the audience ages *or* the arithmetic mean of the audience ages can actually be falling at the same time.
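The effect is easy to verify in Python with the 1937 and 1982 rows from the illustration above (arbitrary values, as noted):

```python
from statistics import mean, median

# Arbitrary illustrative audience ages from the example above.
ages_1937 = [25, 26, 28, 99]
ages_1982 = [16, 39, 41, 79]

# The median rises from 27 to 40 ...
assert median(ages_1937) == 27
assert median(ages_1982) == 40

# ... while the mean actually falls, from 44.5 to 43.75.
assert mean(ages_1937) == 44.5
assert mean(ages_1982) == 43.75
```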

This is why it is important to understand the totality of the values used to calculate the median age, which is usually done by examining the makeup of the age cohorts that figure into the median. I mentioned some of the other issues with the particular values used here in my previous post about Audience Ages, so I won’t go into details about them until I blog about the studies individually. I will mention that when comparing the median ages of audiences to the median age of the population as a whole, it is important to understand that in some of the studies which do report median ages, the sampling cut-off will exclude the youngest audience members^{4}. Census data used to report the median age of the population obviously don’t have this issue.

**Study Design, Methodology, and Publication Bias**

As I mention in note 1) below, I maintain a blog dealing more with issues relating to scientific, mathematical, and psychological reasoning research as it pertains to differences in populations. There I bring up such issues as Study Design, Methodology, and the quickly growing study of Publication Bias. While the latter hasn’t been extended much to the field of cultural economics (yet), it has been particularly fruitful in demonstrating how the body of published studies can itself constitute a sampling bias: an estimated 60% of clinical and medical studies^{5} do not get published for various reasons, which can skew diagnoses and prognoses in predictable ways.

I won’t go into details here, but when I blog about the individual studies I will mention these issues as they pertain specifically to the study under consideration and how the study relates to the whole body of other published (as well as unpublished) work dealing with Audience demographics. Given that publication bias still plagues a field as rigorous as medical research, that the field of cultural economics has no comparable safeguards, and that predictions by economic experts tend to be rather poor, it shouldn’t be surprising that this issue of Aging audiences is rather poorly understood.

**______________________________
NOTES**

1) I wrestled with the idea of publishing this post in my “Comparative Neurocognition” blog, where I often blog about cognitive biases and fallacious reasoning, but decided in the end to post it here.

2) “Larger” in reference to Cantor’s transfinite numbers means the cardinality of the set under consideration. The set of all Counting numbers {1, 2, 3, 4, …} has the same cardinality as the set of Positive Even numbers {2, 4, 6, 8, …}. Both are countably infinite sets, and the elements of each set can be put into a one-to-one correspondence with each other. One way to describe what may seem like the intuitive notion that the set of Counting numbers is “larger” than the set of Positive Even numbers (since the latter set is “half” the former, right?) is by comparing the *density* of the sets. The density of the Counting numbers is 1 and the density of the Positive Even numbers is 1/2, so the first set is twice as dense as the second.
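A minimal Python sketch of the one-to-one correspondence and the density comparison (illustrative only–finite prefixes stand in for the infinite sets):

```python
# The pairing n <-> 2n matches every counting number with exactly one
# positive even number, which is why the two sets have the same
# cardinality despite the evens being only "half" as dense.
pairing = {n: 2 * n for n in range(1, 1001)}
assert len(pairing) == len(set(pairing.values()))  # one-to-one: no collisions

# Natural density of the evens among the first N counting numbers is 1/2.
N = 10_000
evens_up_to_N = sum(1 for n in range(1, N + 1) if n % 2 == 0)
assert evens_up_to_N / N == 0.5
```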

Both of these sets are subsets of the set of all Real Numbers, each of which represents a quantity on a continuum (e.g. a continuous line). The set of Real Numbers is an uncountable set, so it has a cardinality larger than the cardinality of either of the former subsets–in other words, there is no way to put the elements of a countably infinite set into a one-to-one correspondence with the elements of an uncountably infinite set. Also, many subsets of the Real Numbers (e.g. the set of all Real Numbers between 1 and 2) are uncountable and so are larger than the countably infinite subsets.

3) Here I’m being charitable in using the Grand Rapids median age (27) rather than the Los Angeles median age (33). Note that the 1955 Minnesota study states that 54% of its audience is below the age of 35. The median age for Major Symphony Orchestras supplied by the Twentieth Century Fund audience survey is 39 (as shown in appendix table IV-F, pg. 456 of Baumol and Bowen’s “Performing Arts – The Economic Dilemma”), rather than the median age of 38 that Greg Sandow consistently states in all of his “Age of Audience” related blogposts.

4) For example, in Throsby and Withers’ “The Economics of the Performing Arts,” the Morgan Gallup Poll (Computer Report No. 410, 1976) used to give cohort percentages for audience composition in Australia only includes people 14 years and older in its sample population, which will obviously skew age values toward the older range somewhat.

5) This figure comes from the All Results Journal, which states that at present:

more than 60% of the experiments fail to produce results or expected discoveries. This high percentage of “failed” research generates high level knowledge. But generally, all these negative experiments have not been published anywhere as they have been considered useless for our research target.

Some prestigious medical journals have already implemented procedures to reduce publication bias by requiring organizations to register their studies before the studies are conducted, but there is still a high level of publication bias since this pre-registration isn’t mandatory across the whole field.

_________________________

For more posts in this series, visit the Aging of Orchestra Audiences page.

