The Tough Decision to Remove Political Knowledge from the CSES Module 5
By Elisabeth Gidengil and Elizabeth Zechmeister

With its fifth installment, the CSES core module will for the first time contain no political information questions. The CSES Planning Committee’s Political Knowledge Subcommittee[1] reached this decision despite shared agreement that political knowledge is a venerated workhorse in the study of voter choice. Those high and low in political knowledge differ in numerous domains, such as economic voting behavior and the use of heuristic aids in voting decisions (though exceptions exist). Given the significance of this concept to scholars of political behavior, voting, and elections, we have some explaining to do.

Evaluation of Past CSES Political Knowledge Batteries

The first task of the Political Knowledge Subcommittee was to evaluate the effectiveness of past political knowledge batteries as comparative indicators of political sophistication in the CSES project. We first considered the degree to which previous modules had produced sufficient variation in scores within countries to allow for meaningful analysis. Delli Carpini and Keeter (1993) recommend that item difficulty, defined as the percentage of correct answers, fall between 30% and 70% for the items included in a political knowledge index in order to achieve sufficient differentiation.
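
As a concrete illustration, the short Python sketch below (our own, with hypothetical names, assuming responses scored 1 for correct and 0 otherwise) flags the items that fall inside this recommended difficulty band.

```python
import numpy as np

def difficulty_check(X, low=0.30, high=0.70):
    """Check item difficulties against the Delli Carpini and Keeter band.

    X: n-respondents x k-items array of 0/1 scores (1 = correct answer).
    Returns each item's difficulty (share of correct answers) and a
    boolean mask marking the items that fall inside [low, high].
    """
    p = X.mean(axis=0)                    # proportion correct per item
    return p, (p >= low) & (p <= high)    # True = usable for the index
```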

The first three CSES modules sought to achieve adequate variation by instructing local investigators to select one question that two-thirds of respondents would answer correctly, one that half would answer correctly, and one that only one-third would answer correctly. This approach was deemed a failure (Elff 2009). In Module 2, for example, only seven countries achieved the desired distribution of correct answers.

Consequently, the fourth CSES module opted for a different approach: four common multiple-choice questions (the name of the finance minister, the party or group of parties that came in second in the election, the unemployment rate, and the name of the UN Secretary-General). The question about the unemployment rate proved especially challenging, as indicated by a very low proportion of correct responses in some countries. Generally speaking, the common battery does not appear to have performed any better than the approach used in the first three modules, as the distribution of the percentage of correct answers shows (see Table 1). Only Iceland displays the distribution recommended by Delli Carpini and Keeter (1993). The distributions for Mexico and Thailand are especially problematic: only 19.1 percent of Mexican respondents answered the easiest question correctly (taking the percentage correct as the indicator of difficulty), while a mere 4.3 percent answered the most difficult question correctly. The comparable figures for Thailand were 31.0 percent and 0.5 percent, respectively. The median respondent in the United States answered only a single question correctly.

Mean centering can help to mitigate the problem of wide variation between countries in the percentage of correct answers, but it obviously does nothing to address the lack of variation within some countries. Similarly, standardizing each country’s distribution to zero mean and unit standard deviation cannot even out skewed distributions. A final alternative is to dichotomize at the median value, especially if political knowledge is being used as a moderating variable.
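
To make the three options concrete, here is a minimal numpy sketch (our own illustration, not code from the subcommittee) that applies each adjustment to one country’s vector of knowledge scores.

```python
import numpy as np

def within_country_adjustments(scores):
    """Apply the three adjustments discussed above to one country's
    knowledge scores (e.g., number of correct answers out of four)."""
    centered = scores - scores.mean()                  # mean-centred scores
    standardized = centered / scores.std()             # zero mean, unit SD
    median_split = (scores > np.median(scores)).astype(int)  # dichotomized
    return centered, standardized, median_split
```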

Table 1: Distribution of % Correct Answers[2]

Country  2nd party  Finance minister  UN Secretary-General  Unemployment rate
Australia 61.4 55.7 50.6 64.1
Austria 90.1 78.1 58.3 50.3
France 63.3 52.9 22.4 42.5
Germany 94.2 87.9 42.3 34.7
Greece 88.4 68.6 50.0 35.7
Iceland 72.3 61.6 38.9 31.9
Japan 59.2 58.2 41.0 38.8
Mexico 19.1   5.9   4.3   9.3
Montenegro 74.6 46.3 47.7 11.6
New Zealand 86.8 85.3 47.2 33.2
Poland 84.1 47.0   7.8 35.9
Serbia 56.6 43.7 56.5 16.4
Switzerland 48.4 53.8 61.4 56.5
Taiwan 87.1 34.8 18.7 33.8
Thailand   4.1 31.0 10.8   0.5
USA 41.7 27.8 11.3 44.5

One contributing factor to the wide variation in the percentage of correct answers across countries may be differences in the propensity to guess rather than respond “don’t know”. Thai and Mexican respondents, for example, are much more likely to say “don’t know” than give an incorrect answer, and so are Australian and Polish respondents (see the full subcommittee report). The opposite is true of Austrian, German, Greek, and Swiss respondents. Differential propensity to guess may reflect variation in instructions across the surveys, some of which explicitly encouraged “don’t know” responses. Yet Mexico had a low guessing rate even though don’t knows were not encouraged. Other aspects of question format and survey mode could also explain the cross-national variation, as could cultural factors (Mondak and Canache 2004).
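
One simple way to quantify this propensity, sketched below under an assumed coding scheme (1 = correct, 0 = incorrect, -1 = don’t know; not the actual CSES codes), is the share of non-correct responses that are wrong answers rather than admitted don’t knows.

```python
import numpy as np

def guessing_propensity(responses):
    """Among responses that are not correct, the share that are wrong
    answers rather than "don't know" -- a rough guessing indicator.
    Coding assumed: 1 = correct, 0 = incorrect, -1 = don't know."""
    wrong = np.sum(responses == 0)
    dk = np.sum(responses == -1)
    return wrong / (wrong + dk)
```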

A second criterion for evaluating a battery of political knowledge questions is how well the items scale. Cronbach’s Alpha measures the internal consistency of a set of items, i.e., how well the items correlate with one another. As Table 2 shows, when the Module 4 political knowledge battery is assessed country by country, Cronbach’s Alpha was lower than the ideal minimum of 0.70 in all but one case (Australia) and lower than .50 in nine of the 16 countries. Arguably, Loevinger’s H, which summarizes how well a cumulative structure fits the data, is a more appropriate way of evaluating the scalability of the Module 4 political knowledge questions. Yet a strong scale (greater than 0.50) was achieved in only two countries, and in seven countries the value of Loevinger’s H fell below the .30 cut-off for a weak scale, dropping as low as .15 for Mexico.

Table 2: Scalability of the Political Knowledge Questions

Country  Cronbach’s Alpha  Loevinger’s H
Australia .70 .44
Austria .42 .28
France .60 .42
Germany .36 .29
Greece .51 .40
Iceland .44 .27
Japan .48 .24
Mexico .29 .15
Montenegro .44 .31
New Zealand .55 .48
Poland .55 .56
Serbia .48 .28
Switzerland .49 .22
Taiwan .54 .42
Thailand .44 .60
USA .57 .39
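
For readers who want to compute these diagnostics on their own data, the sketch below implements both statistics for dichotomous items (a minimal illustration using population covariances, not the subcommittee’s actual code), followed by a small simulated example.

```python
import numpy as np

def cronbach_alpha(X):
    """Internal consistency of an n-respondents x k-items 0/1 matrix."""
    k = X.shape[1]
    item_vars = X.var(axis=0)          # variance of each item
    total_var = X.sum(axis=1).var()    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def loevinger_h(X):
    """Scalability: observed inter-item covariance relative to the maximum
    attainable given the item marginals, i.e. under a perfect cumulative
    (Guttman) pattern."""
    k = X.shape[1]
    p = X.mean(axis=0)                        # proportion correct per item
    cov = np.cov(X, rowvar=False, bias=True)  # population covariances
    num = den = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            num += cov[i, j]
            den += min(p[i], p[j]) - p[i] * p[j]  # max covariance given marginals
    return num / den

# Simulated example: four items of increasing difficulty driven by a
# common latent ability, so the items should form a cumulative scale.
rng = np.random.default_rng(0)
ability = rng.normal(size=1000)
difficulty = np.array([-1.0, -0.3, 0.3, 1.0])
X = (ability[:, None] - difficulty + rng.logistic(size=(1000, 4)) > 0).astype(int)
print(round(cronbach_alpha(X), 2), round(loevinger_h(X), 2))
```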

Considering Alternative Approaches to Measuring Political Knowledge

Having concluded that previous attempts to develop a political knowledge battery for the cross-national CSES project had significant shortcomings, the Subcommittee considered other possible approaches to measuring political knowledge. First, we ruled out interviewer ratings (see Zaller 1985) because the validity and reliability of the ratings would depend on the training and experience of the interviewers (Delli Carpini and Keeter 1993), which may vary across countries, and because this approach could only be used in face-to-face and telephone interviews.

Second, we considered using a count of “don’t know” responses to political opinion questions as an alternative measure of political knowledge. Based on additional analyses using the AmericasBarometer surveys as well as CSES questions (leader ratings, party ratings, left-right party placements, left-right self-placement), we concluded that “opinionation” cannot substitute for factual political knowledge questions. Rather, the factual knowledge questions appear to perform better than counts of don’t knows as measures of political knowledge.

Third, we considered the value of left-right party placements as proxy measures of political knowledge and sophistication. The CSES asks respondents to place up to nine parties on a left-right scale (or on an alternative scale), and the responses can be used to measure political sophistication. This approach has some limitations, which we detailed in our subcommittee report: the more extreme a party’s position, the older the party, and/or the larger the party’s vote or seat share, the easier it may be for respondents to place the party ‘correctly’. Moreover, the two-party system makes it impossible to derive a comparable scale for the USA. Yet, despite these drawbacks, we were able to construct measures using left-right party placements that outperformed the Module 4 political knowledge battery (see the full subcommittee report for more information).
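
As one illustration of how such a proxy might be constructed (a minimal sketch of our own; the full subcommittee report details the actual measures), respondents can be scored by how closely their placements match a benchmark such as the sample-median placement of each party.

```python
import numpy as np

def placement_score(P, benchmark=None):
    """Sophistication proxy from left-right party placements.

    P: n-respondents x m-parties array of 0-10 placements, with np.nan
    where the respondent could not place the party. benchmark: per-party
    'correct' positions; defaults to the sample-median placement.
    Returns a 0-1 score; higher = placements closer to the benchmark."""
    if benchmark is None:
        benchmark = np.nanmedian(P, axis=0)
    err = np.abs(P - benchmark)
    err = np.where(np.isnan(err), 10.0, err)  # non-placement = maximal error
    return 1.0 - err.mean(axis=1) / 10.0
```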

Fourth, we considered the value of a measure of political interest rather than political knowledge. Political interest is not a perfect substitute for political knowledge (see Boudreau and Lupia 2011): someone can be interested in politics without being knowledgeable, and knowledgeable without being particularly interested. Yet political interest is a strong predictor of political knowledge, and it requires only a single question in place of the four-item political knowledge battery in Module 4.

Conclusion

The Political Knowledge Subcommittee’s review and analyses of the political information batteries used in past CSES modules revealed that neither a bottom-up approach (in which the local investigators generated questions) nor a top-down approach (in which common questions were used) yielded a satisfactory result. We found instead that, despite their own shortcomings, left-right party placement questions can be used to create instruments that outperform direct political knowledge questions. Furthermore, political interest, combined with education (a standard item in the CSES surveys), provides an additional way to distinguish individuals who are tuned in to the details of politics from those who are tuned out. The subcommittee’s final recommendation, presented and approved in the plenary session on August 31, 2016, was therefore to remove political knowledge items from the 5th CSES module, leaving space for a political interest question and the remainder of the planning committee’s recommendations.

Elisabeth Gidengil is Hiram Mills Professor of Political Science at McGill University. Elizabeth Zechmeister is a Professor in the Department of Political Science at Vanderbilt University and Director of the Latin American Public Opinion Project (LAPOP). Both are members of the CSES Module 5 Planning Committee.

[1] The members are as follows: Elisabeth Gidengil (Chair), Rachel Meneguello, Carlos Shenga, and Elizabeth Zechmeister. The full subcommittee report is available here: Political Knowledge Sub-Committee Report.

[2] Ireland was omitted because there are data only for the first political knowledge question. The percentages have been calculated after excluding missing cases, which include cases coded as volunteered refusal. The demographic weight has been used for all analyses.