Democracy’s Denominator
Abstract
Democratic responsiveness concerns the degree to which government policies match public preferences. Responsiveness studies typically use national surveys to characterize public opinion, but whether poll questions overlap with the policy agenda is unknown. The first of two empirical analyses presented here, with hundreds of issues on the national agenda in the United States from 1947 to 2000, reveals that public opinion is mostly unrelated to policy outcomes. The picture appears to be even more ominous—that is, opinion and policy are negatively related—on highly salient issues that attract media attention. A second study revisiting published work confirms that responsiveness patterns look different depending upon whether studies of opinion-policy connections (a) begin with survey data and then examine policy developments, or (b) begin with national legislative agenda issues and then examine survey data. Thus, conclusions about democratic responsiveness depend upon the issues that are examined, and often opinion surveys do not include questions about tangible public policy options. In that sense, future changes in democratic responsiveness might go undetected because scholars often lack data on what goes into the denominator of democracy.
Democracies are often judged by the degree to which leaders respond to public preferences. Scholars have shown that representation of public opinion varies across issues (Miller and Stokes 1963; Wlezien 1995, 1996; Monroe 1998) and salient topics (Page and Shapiro 1983). Opinion-policy responsiveness also takes different forms across the American states (Erikson, Wright, and McIver 1993; Lax and Phillips 2012; Pacheco 2013), institutions (Kuklinski 1978; Erikson, MacKuen, and Stimson 2002), and cross-nationally (Brooks and Manza 2007; Soroka and Wlezien 2010). However, and quite provocatively, recent studies indicate that some opinions matter more than others, and longitudinal trends appear to point to a decline in responsiveness. That is, public policy tends to favor business leaders (Jacobs and Page 2005) or affluent individuals (Gilens 2005; Bartels 2008; cf. Soroka and Wlezien 2010) over the mass public (see also Enns and Wlezien [2011]). Likewise, levels of responsiveness may have been declining in recent eras (Monroe 1998; Jacobs and Shapiro 2000).
While it is hard to overstate the normative importance of studies like these, virtually all of them share the same liability: they depend upon survey data to characterize public policy preferences. That is, many researchers collect available public opinion data and then look to policy outcomes rather than first starting with a comprehensive set of national issues. So, while theorists stress the importance of representation (e.g., Dahl 1956; Pitkin 1967) and studies like those mentioned above provide an empirical scorecard, the score nearly always depends on data availability—whether opinion data exist and on what topics. With few exceptions (see Burstein [2014]), this limitation has not been the subject of sustained scholarly inquiry. Instead, responsiveness studies have been criticized for causal ambiguity (Page 1994) or for a bias toward the status quo (Gilens 2005), but a subtler and potentially more insidious problem is that traditional measures of democratic responsiveness obscure the degree to which poll questions match the national issue agenda. Stated another way, survey research is the primary tool used to measure public opinion (e.g., Herbst 1993; Glynn et al. 2004; Asher 2011). So, if the relationship between opinion and policy is important, then we should consider how polling patterns affect perceptions of democratic responsiveness.
In contrast to previous studies of opinion-policy responsiveness, this study begins with a measure of the national policy agenda and then incorporates survey data to the extent that such data exist. Statistical analyses of hundreds of issues in the United States reveal a less optimistic conclusion about democracy in action than what has typically been found in the past. On issues from the national agenda that were captured in polls, opinion appears to be unrelated to policy, or perhaps even negatively related—especially on salient issues—once efforts are undertaken to assess the entire agenda. All of this is to say that measurement decisions affect conclusions about opinion-policy linkages.
Measuring Democratic Responsiveness
Scholars often measure opinion-policy responsiveness in one of three ways. One approach is dyadic, asking whether the behavior of elected representatives varies with the opinions of their constituents (e.g., Miller and Stokes 1963; Bartels 1991; Hill and Hurley 1999; Karol 2007). A second tactic assesses whether changes in public preferences are associated with changes in public policy (Page and Shapiro 1983) or the degree to which changes in global measures of public mood play out across various governmental institutions (Erikson, MacKuen, and Stimson 2002). A third and related variant considers the relationship between majority opinion and policy at any given point in time (e.g., Monroe 1979, 1998). Viewed statically (i.e., not over time), policy tends to correspond with majority opinion most of the time, especially on salient issues (see Kuklinski and Segura [1995]; Manza and Cook [2002a, 2002b]; Burstein [2003]; or Shapiro [2011] for reviews). All three approaches have virtues, but the last two address “the responsiveness of the political system as a whole” (Page and Shapiro 1983, 176). 1
One potentially unsettling finding is that responsiveness seems to have declined during the late twentieth century. For example, two studies by Monroe (1979, 1998) show that government policies between 1980 and 1991 were less consistent with the preferences of a majority of the American public than during the 1960–79 period; consistency declined from 63 percent in the first period to 55 percent in the second. Similarly, Jacobs and Shapiro (1997) found that congruency of opinion and policy change on welfare, crime, Social Security, and health fell from 67 percent during the Reagan administration (1984–87) to 40 percent during the Bush administration (1988–91), before bottoming out at 37 percent during the early years of the Clinton administration (1992–94). Most recently, Gilens (2005, 784) found low levels of responsiveness in the 1981–2002 period; in a sample of more than 1,700 policy questions, only 35 percent of the proposed policy changes took place, even though majorities supported change on roughly 59 percent of the survey questions. 2
These trends are worrisome. Citizens appear to be getting less from politicians than they were decades ago. The implication is that democracy in America does not work as well as it once did, especially for the poor (Gilens 2005; Bartels 2008; cf. Soroka and Wlezien 2010). Among the causes that Jacobs and Shapiro (2000) cite is the rise of “crafted talk,” where politicians package their proposals to resonate with members of the public, so that they appear responsive without actually being so. Feeding these developments are party polarization, individualization in Congress, incumbency, interest groups, and divisive interbranch relations. Pointing to the beginning of the decline in responsiveness in the 1970s, Jacobs and Shapiro (2000, 5; see also Mooney and Lee [2000]) conclude that “the influence of public opinion on government policy is less than it has been in the past.”
However, some critics dispute this, arguing instead that pandering by politicians is widespread (Simon 2006). For example, in his review, Burstein (2003) reports that over-time comparisons within the same study “find more evidence of increase than decline” (36). Likewise, analysts who employ aggregate-level time-series approaches tend to paint rosier pictures (e.g., Erikson, MacKuen, and Stimson 2002; Wlezien 2004; Hobolt and Klemmensen 2005; Soroka and Wlezien 2010). Thus, responsiveness may or may not be declining, but in either case the findings could be plagued by subtle measurement biases.
CALCULATING DEMOCRATIC RESPONSIVENESS
Democratic responsiveness has typically been measured as the proportion of instances in which public policy is (or moves in a manner) consistent with public opinion majorities. In other words, responsiveness is the number of instances in which policies are consistent with public preferences, divided by the number of issues for which public opinion data on policy preferences exist. Monroe (1998) adopts this basic formulation. He looks for majority sentiment and determines whether policy is consistent with this sentiment. If it is, then that opinion-policy pair is added to the numerator. A count of possible opinion-policy matches is in the denominator. A variant of this basic calculation appears in Page and Shapiro’s (1983) classic study. They look for sizable—six-percentage-point—changes in public opinion. The number of times policy shifted in a congruent manner is the numerator, while the number of possible instances of congruence is the denominator (see also Weissberg [1978]). 3
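The ratio described above can be sketched in a few lines. The issues and outcomes below are hypothetical illustrations, not data from any of the studies cited:

```python
# Sketch of the Monroe-style responsiveness ratio: matches in the numerator,
# issues with available opinion data in the denominator.
issues = [
    # (issue, majority_favors_change, policy_changed); None = no opinion data
    ("issue_a", True, False),
    ("issue_b", True, True),
    ("issue_c", False, False),
    ("issue_d", True, None),
]

def responsiveness(pairs):
    """Share of issues with opinion data where policy matches majority opinion."""
    usable = [(want, got) for _, want, got in pairs if got is not None]
    matches = sum(1 for want, got in usable if want == got)
    return matches / len(usable)

print(round(responsiveness(issues), 2))  # 2 matches / 3 usable pairs -> 0.67
```

Note that `issue_d` simply drops out of the calculation: issues without polling data never enter the denominator, which is the core measurement concern of this article.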
What this means is that studies of responsiveness often depend upon the availability of polling data. In other words, the reliance on public opinion surveys means scholars can perform this type of calculation only in areas where data exist. Page acknowledges as much when he states that “one fundamental type of sampling bias subtly, and almost inescapably, affects nearly all studies of opinion-policy links. We can study the impact of public opinion only to the extent that public opinion is measured” (2002, 332). Similarly, in a review of the democratic responsiveness literature, Burstein writes, “Studies of the impact of policy on opinion always begin with public opinion—that is, with issues for which public opinion data are available. But such data are available for only a small fraction of all issues, those controversial enough to warrant attention from survey organizations” (2003, 38). These concerns have surfaced in other works as well (e.g., Jones and Jenkins-Smith 2009; Shapiro 2011; Lax and Phillips 2012; Manza and Brooks 2012).
An exception is Burstein (2014), who randomly samples congressional legislative proposals in an attempt to provide what he hopes will be an unbiased assessment of democratic responsiveness. Specifically, Burstein starts with 5,977 public bills introduced in the 101st Congress during 1989–90, and then chooses 60 to study: 50 selected at random from the entire set, and another 10 selected randomly from bills reported out of committee. Burstein finds no statistical link between opinion and policy: Opinion and policy were consistent 18 times, half of the time that public opinion data existed (in 36 cases, out of 60 total). Although public opinion measures were unavailable for two dozen of the proposals, once the entire agenda was taken into consideration, Burstein’s estimate of responsiveness dropped to 31 percent (18 times out of 58; two of the original 60 were omitted due to conflicting opinion indicators).
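Burstein's counts, as reported above, make the denominator's effect easy to verify:

```python
# Burstein's (2014) figures: the responsiveness estimate depends directly
# on which denominator is used.
consistent = 18          # opinion-policy matches
with_opinion_data = 36   # proposals for which poll data existed
full_agenda = 58         # all usable sampled proposals (60 minus 2 dropped)

print(round(consistent / with_opinion_data, 2))  # 0.5  -> "half the time"
print(round(consistent / full_agenda, 2))        # 0.31 -> the 31 percent figure
```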
Burstein’s approach differs from that of the majority of researchers, who often examine public policy outcomes after identifying a set of issues where public preferences have been assessed in surveys. This could lead to misleading inferences. While pollsters ask questions on a variety of topics (e.g., Stevens 2002; Shaw and Mysiewicz 2004), they are under no obligation to field a representative set of questions. Indeed, constraints (e.g., finances, time pressures, or survey length considerations) make some questions more likely to appear than others. As Burstein (2014) argues, opinion polls usually focus on the issues that the public considers important, which are “the very issues on which the public is most likely to hold elected officials accountable, and on which, therefore, democratic governments are mostly likely to do what the public wants” (45). By focusing only on issues salient among the public, public opinion polls ignore many of the proposals considered by Congress (more than 40 percent, by Burstein’s estimation). Although this could mean that current estimates of congressional responsiveness to public opinion are inflated, one potential problem is that Burstein examines just 1 percent of congressional policy proposals (60 bills, out of the nearly 6,000 bills introduced in a single congressional session). And yet, it is instructive that Burstein could locate issue preference data for only a handful of policy proposals. So, even though Burstein broke from the past by starting with policy first, the size of his sample—even if random—limits the generality of his conclusions. Beyond power concerns, alternative (perhaps even better) measures of the policy agenda may exist.
A NEW DENOMINATOR
Scholars studying democratic responsiveness are often limited to publicly available surveys. However, lawmakers might act, or fail to act, on legislative proposals not covered by poll questions. A problem of this nature plagued studies of legislative gridlock for years; accurately characterizing the amount of legislation that fails to pass as a result of factors like divided government (the numerator) depends on how one specifies the legislative agenda (the denominator). In a landmark study, David Mayhew (1991) concluded that divided government did not affect legislative output. Years later, however, Sarah Binder (1999; 2003) argued that gridlock reduced output once the scope of the national agenda was considered. 4
Denominator revisions have altered conclusions in other studies as well. For instance, recharacterizing the denominator of electoral participation—from the voting age population to eligible voters—changed perceptions of turnout patterns in America (McDonald and Popkin 2001). Voter participation is higher if calculated with the eligible voting population, as opposed to all individuals old enough to vote, which includes many ineligible individuals. Likewise, by changing the denominator to include both recorded votes and voice votes, scholars examining roll call voting in legislatures discovered that past studies of party cohesion overestimated the degree of party unity in legislatures (Carrubba et al. 2006).
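The turnout example above can be made concrete with a toy calculation; all of the figures below are invented for illustration, not McDonald and Popkin's actual estimates:

```python
# Hypothetical illustration of the turnout-denominator revision: the same
# vote total yields a higher turnout rate over eligible voters.
votes = 105_000_000
voting_age_pop = 210_000_000  # includes noncitizens and other ineligible people
eligible_pop = 186_000_000    # a made-up eligible-voter estimate

print(round(votes / voting_age_pop, 3))  # 0.5
print(round(votes / eligible_pop, 3))    # 0.565 -- higher with the corrected base
```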
A related problem looms in the background of democratic responsiveness calculations. Polling takes place on a wide variety of issues, but some topics are more popular than others. Figure 1 shows the essence of the potential problem. Public opinion on policy might only tangentially overlap with the national policy agenda, as shown with the oval in figure 1 labeled “minimal overlap.” In other instances, it might have only “some overlap” (diagonal stripes), as Burstein (2014) suspects, with pollsters only partially covering what legislators are considering. Alternatively, it could be the case that polling organizations ask about the national agenda often, so that there is “substantial overlap” of polling and the policy agenda, as shown in the oval with dotted shading.
Three Possible Scenarios Depicting How Survey Questions Might Overlap with the National Policy Agenda. The oval in light gray shading in the middle depicts the “National Policy Agenda,” which represents all issues being considered for governmental action at a given moment in time. The dark oval marked “Minimal Overlap” shows a hypothetical situation in which the polling agenda rarely corresponds to the national policy agenda. The oval with diagonal stripes shows “Some Overlap” with the policy agenda, but survey questions also cover many other issues not being considered by lawmakers. The oval with dots illustrates a scenario of “Substantial Overlap” when poll questions often cover the policy agenda.
Revisiting the denominator in this manner has the potential to influence judgments of democratic responsiveness. While pollsters ask about many aspects of American political life, the polling agenda probably does not perfectly match the national policy agenda. If government leaders spend a good deal of their time on issues that are ignored by pollsters, then it might seem like responsiveness is high or declining even though it is not; in other words, the public might get what it wants on topics in polls, but those topics might not reflect the legislative agenda. 5 Thus, responsiveness scores might be lower (or higher) if poll questions fail to correspond to the national policy agenda.
Burstein’s (2014) core intuition—that the study of democratic responsiveness may be flawed due to sampling bias—has merits; in such studies, it is important to define a set of issues being considered and then assess public opinion. Regrettably, though, Burstein’s study was small (perhaps because he went into considerable depth) and the methodology he employed has shortcomings. In particular, Burstein is conscious of sampling coverage bias, but sampling error is sensitive to both the method of sampling as well as the number of cases sampled. Larger samples are more precise (i.e., less sampling error). Few survey researchers would be comfortable drawing inferences about the population from a 60-unit sample. At the very least, the margin of error with a sample of this size would be so large as to make any conclusions extremely tentative. 6 Burstein’s sample shrinks further because public opinion data do not exist on some of the legislative proposals he sampled. Thus, it is not surprising that Burstein failed to find a significant relationship between opinion and policy in such a small sample.
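The sampling-error concern can be made concrete with the standard margin-of-error formula. A minimal sketch, assuming simple random sampling, the worst-case p = .5, and a finite-population correction using Burstein's population of 5,977 bills:

```python
import math

# Back-of-the-envelope 95 percent margin of error for a simple random sample.
def margin_of_error(n, N=None, p=0.5, z=1.96):
    moe = z * math.sqrt(p * (1 - p) / n)
    if N is not None:
        moe *= math.sqrt((N - n) / (N - 1))  # finite-population correction
    return moe

print(round(margin_of_error(60, N=5977), 3))    # 0.126, i.e., +/- 12.6 points
print(round(margin_of_error(1500, N=5977), 3))  # 0.022 for a poll-sized sample
```

A margin of roughly plus or minus 13 percentage points is an order of magnitude larger than what national pollsters tolerate, which is the sense in which a 60-unit sample makes conclusions extremely tentative.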
Similarly, lawmakers initiated all of the proposals Burstein samples. That is, he studies (or samples from) only what has been introduced, but agenda control is important (Carrubba et al. 2006). Legislative leaders determine what is, or is not, going to be considered. Therefore, by studying bills that have been introduced—rather than the broader set of issues that lawmakers could be acting on—Burstein’s sample depends upon the actions of politicians, many of whom are already invoking public opinion strategically to seem responsive (Jacobs and Shapiro 2000; Cook, Barabas, and Page 2002). In that sense, Binder’s (2003) characterization of the national legislative agenda, while not perfect, casts a wider net: issues need not be formally advanced by legislators to be worthy of consideration. (A supplemental online appendix provides more details on the agenda measure.)
Empirical Analyses
To what degree does the polling agenda match the public policy agenda? Furthermore, how do polling patterns influence our perceptions of democratic responsiveness? To address these questions, it is helpful to use the more than 600,000 survey questions found in the Roper Center for Public Opinion Research’s iPOLL archive. The iPOLL archive shows opinion frequencies for individual questions, and indicates the organization that conducted the study, the field interview dates, the sample sizes, and other methodological details.
The analyses reported here make use of the iPOLL archive in two ways. The first study starts with Binder’s (2003) historical data on the content and passage of legislation on the national policy agenda, then attempts to locate poll questions on those topics. This will provide insight into how survey questions map onto the national policy agenda, as well as the degree to which opinion is associated with policy outcomes. A second empirical study adopts a somewhat different approach by revisiting the main analyses from Gilens (2012) to see if the conclusions change once responsiveness is recalculated for issues on (or off) the national agenda. The analyses are divided into two studies, with details on the methods appearing in each section below.
STUDY 1: DEMOCRATIC RESPONSIVENESS ON THE NATIONAL POLICY AGENDA
Patterns in policy polling could alter perceptions of democratic responsiveness, especially if survey questions do not correspond to the policy agenda. To investigate this possibility, opinion-policy connections were evaluated based upon the national issue agenda. In particular, responsiveness scores were calculated using the entire set of Binder’s (2003) data on the US policy agenda from the 80th through the 106th congressional sessions, which cover the period from 1947 through 2000. As noted earlier, Binder’s measure is based upon unsigned editorials appearing in a prominent national newspaper, the New York Times, during each two-year congressional session. 7 The issues mentioned at any given point were diverse (e.g., price controls in the postwar period of the late 1940s, statehood for Hawaii in the 1950s, desegregation in the 1960s, ratification of nuclear arms treaties in the 1980s, welfare reform in the 1990s), and the number of issues on the agenda varied too. For example, the Congress with the most issues was the 99th (1985–86), with 141 distinct issues, while the 86th (1959–60) had the fewest, with 62 legislative concerns. Across the 27 sessions, the average was 104 issues per congressional session.
In the aggregate, 2,818 issues were on the national policy agenda from 1947 through 2000. To match public opinion to these issues, two coders reviewed more than 20,000 polls from the iPOLL collection to determine whether polling data existed on each issue. 8 Poll questions selected for inclusion were all from nationally representative surveys (drawn from more than 70 organizations, with most coming from well-known firms like Gallup, Princeton Survey Research Associates, Louis Harris & Associates, major news media, and academic survey researchers from the University of Michigan and the National Opinion Research Center). The principal requirements were that questions (a) correspond to the issues on the agenda and (b) capture what the public prefers on an issue. 9 Once a question was deemed relevant, a database was constructed with all available questions for every issue. Of the nearly 3,000 issues on the agenda from 1947 through 2000, 658 had at least one question containing public preferences on those same issues.
However, these 658 issues are not distributed evenly over this period. For some Congresses, as few as 6 percent of the issues on the agenda were queried in polls (4 out of 62 issues were polled in the 86th Congress), while at other times more than a third of the issues on the agenda appeared in survey questions (the 104th Congress had polling on 39 of its 107 issues). Figure 2 shows the variations in the size of the agenda as well as the proportion with poll coverage. On average, 23 percent of the issues on the agenda were queried in iPOLL’s archived questions. Thus, the polling agenda does not necessarily align with the policy agenda; indeed, over three-quarters of issues on the policy agenda are not covered in national polls. Scenarios akin to the “minimal” to only “some” overlap depicted earlier (figure 1) seem to describe reality, providing even less coverage than Burstein (2014) suspected would exist.
Size of National Policy Agenda and Proportion with Polling Data, Congressional Sessions, 1947–1948 (80th Congress) to 1999–2000 (106th Congress).
On the 658 (out of 2,818) issues from the national agenda that were the subject of at least one poll question, some issues featured dozens of questions. In fact, more than 60 percent of the items with poll coverage featured more than one poll (410 out of 658, or 62 percent), with an average of nearly 4 questions (mean = 3.7) on the issues that garnered any attention from pollsters. More of the policy agenda could have been covered with a more equitable distribution of polling (i.e., if the 2,449 questions were spread across the 2,818 issues instead of focusing on fewer than a quarter).
Notwithstanding these gaps in coverage, it could be that responsiveness calculations are unaffected by the lack of polling on most issues. This would be the case if the polling coverage, though limited, were representative of the agenda. Yet it is not. In particular, it is possible to predict which issues receive coverage, as well as the number of questions, so the covered issues are not a random sample. 10 In analyses (not shown), issue salience, as captured in Binder’s (2003) measure of the number of editorials in the New York Times, is a statistically significant (p < .01, two-tailed) and positive predictor of polling. Certain congressional sessions—especially those occurring after the 1970s—were also more likely to have received attention from pollsters. Institutional configurations, such as Democratic control of the presidency, House, or Senate, were also significant predictors of polling counts, positively so in the case of the first two. Issue dummy variables for the most numerous categories (Crime/Legal/Civil Rights, Environmental/Energy/Science, Economic/Fiscal/Tax, Foreign Policy/Trade, Defense/Terrorism/Intelligence, Government Administration/Elections/Native Americans) were mostly unassociated with polling coverage relative to omitted categories. Net of these factors, legislative success (i.e., issue passage or not) was negatively related to the prevalence of polling (p < .05, two-tailed), which helps set the stage for the next set of analyses.
In particular, Binder (2003) characterizes the national policy agenda as well as the legislative success on those same policies (i.e., congressional passage and presidential approval, or an override of a presidential veto). Of the 2,818 issues on the agenda from 1947 to 2000, fewer than half were counted as legislative successes (n = 1,350, or 47.9 percent). The proportions are roughly similar for the subset of issues with polling coverage; just under half (46 percent, or 304 out of 658) were instances of policy passage. However, just because pollsters ask about policies that have a roughly equal chance of succeeding does not mean that public preferences prevail. Figure 3 demonstrates that the public often fails to secure representation.
Democratic Responsiveness from 1947 to 2000 in US Congressional Sessions from the 80th to the 106th. The overall responsiveness series shown in the dark black line uses the entire national policy agenda for each congressional session in the calculation of opinion and policy correspondence (i.e., policy changes when a majority supports change or does not change when majorities oppose). A second measure, depicted in the gray line, shows agreement between policy and opinion for only the issues from the national policy agenda with polling data. In both cases, the national policy agenda data come from Binder (2003) and polling data come from the Roper Center for Public Opinion Research.
Figure 3 shows the proportion of democratic responsiveness on the vertical axis (defined here as instances when a majority of the public prefers change and gets it, as well as when majorities prefer no change and policy does not change). The horizontal axis shows biennial congressional sessions from the 80th in 1947–48 until the 106th in 1999–2000. Two series are depicted: one for overall responsiveness (in black) and another for responsiveness on the subset of issues with polling coverage (in gray). Both are modest: the overall responsiveness average is .11, with a range of 0 to .22, while the average for issues with polling coverage is .46, with a wider range of 0 to .78. The figure highlights particular congressional sessions with high responsiveness (1953–54, 1967–68, 1981–82, and 1995–96) as well as some low points. Both series end below where they started in 1947; however, with so much session-to-session volatility, it is hard to say definitively that responsiveness is on the decline. It is easier to say that responsiveness appears to be low, especially overall; the overall series rises above .20 just twice.
A more comprehensive analysis of the relationship between public opinion and policy appears in table 1. The table shows coefficients for models where the dependent variable is legislative passage on an issue from the national policy agenda. The main predictors are public opinion (as measured by the percentage of the public in favor of the issue in question), issue salience (as captured in Binder’s data with the number of editorials), and the interaction of these two. The first three columns report models (one with only public opinion, one for public opinion and its interaction with salience, and a final one that includes the interaction along with several control variables) for the “original” data—that is, the 658 issues with at least one poll question. 11 The next set of three models in table 1 under “missing opinion data imputed” attempt to recover the missing public opinion predictors using the variables discussed earlier that predict polling coverage. These analyses make use of multiple imputation techniques (King et al. 2001) to generate five estimates of what public opinion was for any given topic that lacks coverage, and then combines the estimates in a single model. As it turns out, however, the estimates are similar across the columns/techniques.
Table 1.
The Relationship between Public Opinion and Public Policy from the National Legislative Agenda
| | Original data | | | Missing opinion data imputed | | |
|---|---|---|---|---|---|---|
| | Coeff. (SE) | Coeff. (SE) | Coeff. (SE) | Coeff. (SE) | Coeff. (SE) | Coeff. (SE) |
| Public opinion | –.0045 (.0030) | .0015 (.0037) | .0063 (.0033) | –.0019 (.0016) | .0026 (.0021) | .0064** (.0020) |
| Issue salience | | .0674** (.0205) | .0659** (.0187) | | .0609** (.0125) | .0640** (.0112) |
| Public opinion × Salience | | –.0007* (.0003) | –.0008** (.0003) | | –.0006** (.0002) | –.0007** (.0002) |
| Control variables | None | None | Included | None | None | Included |
| Constant | .1946 (.2016) | –.3395 (.2360) | –1.520** (.6384) | .0535 (.0915) | –.3354** (.1169) | –1.055** (.3534) |
| Number of observations | 2,449 | 2,449 | 2,449 | 4,609 | 4,609 | 4,609 |
| Number of issues | 658 | 658 | 658 | 2,818 | 2,818 | 2,818 |
Note.—The table displays probit coefficients with clustered robust standard errors in parentheses. The dependent variable is policy output, with a value of 1 for instances of legislative success on an issue from the national policy agenda and 0 otherwise (Source.—Binder 2003). Salience is a count of the number of editorials on the issue from the New York Times (Source.—Binder 2003). The models with “control variables” include dummy variables for congressional session, issue, survey organization, and coder identity, as well as coder confidence. The models with “missing data imputed” employ multiple imputation techniques (e.g., King et al. 2001) prior to estimation to recover the missing public opinion responses on issues from the national legislative agenda without survey coverage.
** p < .01; * p < .05 (two-tailed)
Public opinion, on its own, is negatively related to policy passage in the first column (coeff. = –.0045), but the standard error (.0030) is too large to attain statistical significance. Adding salience and its interaction changes the picture considerably: the salience coefficient is positive (capturing issues with heavy New York Times coverage when public support is low), but the interaction of salience and opinion is negative, suggesting that strong public demand is less likely to result in policy passage when an issue is featured regularly in editorials. This holds even with control variables for congressional session, issue dummies, the survey organization conducting the poll, the research assistant researching poll coverage, and coder confidence. 12
The models in the first three columns of table 1 analyze the 658 issues that received attention in public opinion polls, but iPOLL sometimes contains several poll questions for a given issue on the agenda, which means the total number of observations in the “original” data is 2,449. 13 The number of observations increases for the last three columns of table 1 because public opinion has been multiply imputed for the issues that lacked coverage in surveys (using techniques in King et al. [2001]). In these analyses, the number of issues is 2,818, and the total number of observations in the model is 4,609, due to the repeated polling on issues. Yet, the conclusions are similar. The coefficients are roughly the same size and sign, but the significance improves due to the added power. Opinion on its own appears negatively related to policy passage, but an interaction with salience reveals that this is concentrated among the issues that are featured often in the press (perhaps because they are issues that repeatedly fail). 14
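The pooling step of the multiple-imputation analyses can be illustrated with Rubin's rules, the standard formulas for combining estimates across imputed datasets that underlie approaches like King et al. (2001). The five coefficients and their standard errors below are invented for illustration, not the article's actual imputations:

```python
# Rubin's rules: pool m per-imputation estimates into one coefficient and
# a total variance that adds between-imputation uncertainty.
def combine(estimates, variances):
    m = len(estimates)
    q_bar = sum(estimates) / m                    # pooled point estimate
    w_bar = sum(variances) / m                    # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between-imputation
    total_var = w_bar + (1 + 1 / m) * b           # total variance
    return q_bar, total_var

est, var = combine([.0061, .0064, .0067, .0060, .0068], [.0020 ** 2] * 5)
print(round(est, 4))  # 0.0064 -- the pooled coefficient
```

The total variance always exceeds the average within-imputation variance, which is how the procedure keeps imputed values from overstating the precision of the final estimates.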
The best way to illustrate the effects is by calculating predicted probabilities of legislative passage for various combinations of opinion and salience. In other words, do issues on the agenda pass if public opinion is supportive, especially once salience is taken into account? Table 2 shows that opinion and salience are indeed important predictors of legislative success, but in a manner opposite that which might be preferred normatively. The first row puts the average probability of passage at .45 (with a 95 percent confidence interval from .39 to .52) for issues with average public support (roughly 55 percent) and average salience (~5 editorials). 15 Six other scenarios are shown in table 2. The first set varies opinion from low to high (i.e., two standard deviations below/above the mean) to show that moving opinion dramatically in support of a bill has a modest negative effect on the probability of passage of –.07 (from .49 on issues with low support but average salience, to .42 with high levels of public support), albeit an insignificant one (SE = .09, interval from –.26 to .12). On issues with low salience (only one editorial, the lowest of the sample), the probability of passage rises negligibly (.02) as opinion moves from low to high, in a statistically insignificant fashion (SE = .11). The most dramatic scenario appears in the last set of predicted values for table 2. On issues with high salience (i.e., 20.8 editorials), passage is very likely when opinion is unsupportive (pr = .79, SE = .08), but it drops precipitously to a less than even chance of passage (pr = .41, SE = .09) for high-salience issues when the public is highly supportive. This 38-percentage-point drop in the likelihood of passage is statistically significant (95 percent interval from –.62 to –.09) and counterintuitive. Public opinion appears to be unrelated to policy passage except on highly visible issues, in which case it is negatively related.
Table 2.
Predicted Probability of Legislative Success in Various Opinion and Salience Scenarios
| Scenario | Probability of passage (SE) | 95% conf. interval: low | 95% conf. interval: high |
|---|---|---|---|
| Average public opinion support, average salience | .45 (.03) | .39 | .52 |
| Low public opinion support, average salience | .49 (.06) | .37 | .61 |
| High public opinion support, average salience | .42 (.05) | .32 | .52 |
| Difference (low to high) | –.07 (.09) | –.26 | .12 |
| Low public opinion support, low salience | .40 (.07) | .27 | .54 |
| High public opinion support, low salience | .42 (.06) | .32 | .54 |
| Difference (low to high) | .02 (.11) | –.19 | .23 |
| Low public opinion support, high salience | .79 (.08) | .61 | .92 |
| High public opinion support, high salience | .41 (.09) | .24 | .59 |
| Difference (low to high) | –.38 (.13) | –.62 | –.09 |
Note.—The table shows predicted probabilities via simulation (King, Tomz, and Wittenberg 2000) based upon the model estimates in column 2 of table 1 for the specification using all available instances of agenda items with poll coverage (n = 2,449). Average public opinion support is 54.6 percent in favor of the issue, with low and high points set two standard deviations below and above the mean (low = 16.3, high = 93.1). Average salience is 5.2 based upon the number of editorials in the New York Times for valid cases, while low is a value of 1 and high is two standard deviations above the mean at 20.8.
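The quantities in table 2 follow the simulation logic of King, Tomz, and Wittenberg (2000): draw coefficient vectors from their estimated sampling distribution, compute the probit probability for a scenario under each draw, and summarize the draws. A minimal sketch, with hypothetical coefficient estimates and covariances standing in for the actual column 2 results:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def probit_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical stand-ins for the probit estimates
# (intercept, opinion, salience, opinion x salience) and their
# covariance matrix; the replication data would supply real values.
beta_hat = np.array([0.30, -0.004, 0.10, -0.002])
vcov = np.diag([0.05, 4e-6, 1e-3, 5e-7])

def simulate_prob(opinion, salience, n_sims=1000):
    """Average probit probability of passage across simulated
    coefficient draws, with a percentile confidence interval."""
    draws = rng.multivariate_normal(beta_hat, vcov, size=n_sims)
    x = np.array([1.0, opinion, salience, opinion * salience])
    probs = np.array([probit_cdf(z) for z in draws @ x])
    lo, hi = np.percentile(probs, [2.5, 97.5])
    return probs.mean(), (lo, hi)

# scenario at roughly the sample means used in table 2
p, (lo, hi) = simulate_prob(opinion=54.6, salience=5.2)
```

Differences between scenarios (the “low to high” rows of table 2) come from applying the same coefficient draws to both scenarios and summarizing the distribution of the per-draw differences.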
It is also possible to analyze the data differently, for example by asking whether policy changes when a majority of the public prefers change and stays the same when the public prefers no change (i.e., akin to Monroe [1998] or Burstein [2014]). Those analyses are similar in that responsiveness looks different when evaluated relative to the policy agenda. Opinion is unrelated or even negatively related to legislative action overall, although the picture is more encouraging for the subset of issues that are the subject of repeated editorials. Also, while most of the policy agenda lacks polling coverage, hundreds of issues across more than 50 years do have such coverage. The picture becomes even clearer, from a statistical standpoint, with attempts to recover what the public may have preferred on the more than three-quarters of the agenda uncovered by polls. 16 In either case, the normative conclusion is not especially encouraging. Democratic responsiveness in the late twentieth century is low, with opinion mostly unrelated to policy, or negatively related to policy on issues that attract media attention.
STUDY 2: REANALYSIS OF GILENS (2012)
The analyses reported above begin with identifying the policy agenda before examining polling coverage. Yet, it is possible to revisit past work on responsiveness to learn whether the findings differ depending upon whether the cases analyzed were on or off the national agenda. To accomplish this, it was necessary to find a published opinion-policy responsiveness study that (a) dealt with specific policy issues, (b) covered a fairly long time period in sufficient depth, and (c) made the data publicly available for reanalysis. One prominent work that meets all three criteria is that of Gilens (2012; see also Gilens [2005]), who studied the relationship between public opinion and public policy over many of the same years as studied above. While his criterion for data inclusion was more limited (e.g., he began by identifying survey questions that used the word “oppose” [Gilens 2005, 782], whereas the analyses above used many different styles of policy preference questions), his study is an example of starting with polling first (as opposed to the agenda). Thus, it is possible to look within his analyses for issues that happen to be on the policy agenda versus those that were not. 17
The Gilens (2012) study is complex, diving into numerous aspects of responsiveness (e.g., which income groups get represented, policy topic variations, interest group influence, over-time patterns, electoral concerns). While it is not possible to revisit every analysis here, his core analysis in the first empirical chapter relates public opinion to policy responsiveness. The first column of table 3 reproduces the results shown in Gilens (2012, 76, table 3.1, column 1). In particular, his methodology uses a public opinion logit coefficient value to predict policy change on issues from 1981 to 2002. The entries in the lower rows of column 1 of table 3 replicate the other results reported in Gilens that show the predicted probability of policy responsiveness if 20 percent favored the policy (which is estimated at .19) versus predictions if 80 percent favored the policy (estimated to be .43). This 24-percentage-point gain (i.e., .19 to .43) is then recast as a ratio of 2.2 based upon high opinion over low (i.e., .43/.19 = 2.2).
Table 3.
Replication of Gilens (2012) and Reanalysis for Issues On/Off the National Policy Agenda
| | Gilens (2012) replication: original coeff. | Gilens (2012) replication: subset 1981–2000 | On/off-agenda interaction: subset 1981–2000 | Opinion data added to agenda: Gilens DV | Opinion data added to agenda: Binder DV |
|---|---|---|---|---|---|
| Public opinion logit coefficient (standard error) | .41** (.05) | .31** (.06) | .29** (.09) | .51* (.22) | .00 (.21) |
| On national policy agenda (standard error) | | | .05 (.12) | | |
| Public opinion × On agenda (standard error) | | | .02 (.12) | | |
| Intercept | –.85 | –.86 | –.88 | –.71 | –.39 |
| Probability if 20% favor | .19 | .22 | .22 | .20 | .40 |
| Probability if 80% favor | .43 | .39 | .39 | .50 | .40 |
| Relative ratio (80%/20%) | 2.21 | 1.82 | 1.77 | 2.56 | 1.00 |
| N | 1,779 | 1,520 | 1,520 | 181 | 181 |
| –2 × Log-likelihood | 2,198 | 1,877 | 1,876 | 233 | 244 |
| Likelihood ratio χ² | 60.16 | 28.28 | 28.59 | 5.71 | 0.00 |
| Model significance | p < .01 | p < .01 | p < .01 | p < .05 | n.s. |
Note.—The dependent variable is policy outcome, coded 1 if the proposed policy change took place within four years of the survey date and 0 if it did not, for the first four columns. The last column uses Binder’s (2003) dichotomous indicator of whether policy change took place for the policy agenda item. Models in columns beyond the first analyze a subset of Gilens for the years 1981 to 2000 that overlap with Binder’s (2003) data on the national policy agenda. Consistent with the methods Gilens (2012) employs, the first row of each model uses predictor variables that are the logits of the percentage of respondents favoring the proposed policy change. ** p < .01; * p < .05 (two-tailed).
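The probabilities and the 2.2 ratio in the first column of table 3 can be recovered arithmetically from the reported coefficients: take the logit of the share favoring change, form the linear predictor, and apply the inverse logit. A short check using only the published intercept and slope:

```python
import math

def policy_change_prob(pct_favor, intercept=-0.85, slope=0.41):
    """Predicted probability of policy change when the predictor is
    the logit of the proportion favoring change (coefficients from
    the first column of table 3)."""
    opinion_logit = math.log(pct_favor / (1.0 - pct_favor))
    xb = intercept + slope * opinion_logit
    return 1.0 / (1.0 + math.exp(-xb))

low = policy_change_prob(0.20)   # about .19
high = policy_change_prob(0.80)  # about .43
ratio = high / low               # about 2.2
```

Because the predictor is a logit of opinion rather than the raw percentage, the model implies diminishing marginal effects of opinion near the extremes, which is worth keeping in mind when comparing the 20 percent and 80 percent scenarios.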
Other columns of table 3 show similar analyses for a subset of Gilens’s data. His analysis (Gilens 2012, chapter 3) spanned 1981 to 2002. The models in the columns beyond the first omit the last two years (i.e., they stop at 2000) because Binder’s agenda data do not extend that far. Setting aside these two years, however, does not change the substantive story. In column 2 of table 3, public opinion is related to policy change overall (coeff. = .31, p < .01); in analyses not shown, I also confirm the stronger connection for respondents at the 90th income percentile than for those at the 10th percentile. The remaining columns of table 3 include a dummy variable for whether the issue was on the national policy agenda (810 of the 1,520 issues) as well as an interaction between opinion and being on the policy agenda. The patterns show that Gilens’s results are unaffected by whether the issue was on the agenda (i.e., scoring each observation as on/off the agenda based upon Binder’s agenda data). Specifically, the interaction term representing issues on the agenda is positive but statistically insignificant.
While the overall positive relationship between opinion and policy might seem inconsistent with the patterns reported earlier, two points are worth remembering. First, Gilens (2012) began with a search for polling data and then moved to policy, while the analyses reported in table 1 started with the policy agenda and then moved to the available opinion data. In other words, roughly half of the issues were on the agenda (53 percent of the 1,520 issues in the 1981–2000 subset), but this means that hundreds of legislative issues were not included in Gilens’s study. Second, the policy coding procedures were subtly different. In particular, Gilens looks for policy change for a period of four years beyond the survey question, while Binder (2003) evaluated legislative success for issues “enacted into law by the end of the Congress” (38).
To address both of these limitations, and to better equate the two styles of research, a final set of models revisits the analyses presented in study 1, this time for a subset in which it is possible to match Gilens’s (2012) opinion and outcome data to issues on the national policy agenda. For this smaller set of issues on the agenda with overlapping dates and opinion data (i.e., for 181 of the 1,150), Gilens’s opinion measure is a significant predictor of policy only when using his outcome measure. In other words, public opinion for the whole sample is positively and significantly related to policy (p < .05), as we see in the second-to-last column of table 3. 18
Perhaps most importantly—and this result bridges the two studies—if the outcome measure is switched to Binder’s (2003) measure, then the patterns disappear, as seen in the last column of table 3. That is, public opinion is unrelated to policy. 19 Thus, we see confirmation of the basic intuition guiding the analyses. Perceptions of democratic responsiveness are heavily influenced by the metrics employed. Starting with opinion data and then moving to policy output tends to paint a picture of responsiveness. The landscape looks different—and less cheerful—when responsiveness is recalculated as the opinion-policy relationship on issues from the national agenda.
Conclusions
How well does democracy work in the United States? Answers vary, but the trends seem to be moving in the wrong direction from the standpoint of democratic theory—that is, people seem less and less likely to get what they say they want from government. However, responsiveness calculations depend on the polling data used, and how the data are employed. The irony is that even though polls seem to be ubiquitous, relatively few polling questions pertain to policies on the national agenda. In other words, while empirical estimates vary, the reality is that it is difficult to characterize democratic responsiveness patterns for most issues on the national policy agenda due to a lack of survey data on policy preferences.
Survey experts periodically reevaluate survey methods (e.g., Smith 1987; Keeter et al. 2006; Marsden and Wright 2010; Keeter 2012) as well as the evidence on opinion-policy linkages (Manza and Cook 2002a, 2002b; Burstein 2003). Distilling democracy to a single number makes over-time comparisons easier, but doing so risks glossing over some of the underlying trends. As with legislative gridlock (Binder 1999) or voter turnout (McDonald and Popkin 2001), how the denominator is calculated influences judgments of democratic vitality.
This study, of course, has limitations. Chief among them is the reliance on data from the iPOLL database. Roper is the world’s largest repository of public opinion questions, but it does not include all questions ever asked. For example, the Roper data do not incorporate politicians’ private polls (e.g., Druckman and Jacobs 2006), and other times leaders may lack data on public preferences altogether (see Herbst [1998]).
A second set of limitations concerns the use of specific policy preferences rather than something broader, such as policy mood or ideology (Stimson 1991). Aggregation smooths over gaps in polling coverage, revealing representation across time and institutions (Erikson, MacKuen, and Stimson 2002). Yet, ideology measures may not correspond well to congressional votes or particular policies. Also, sometimes the public claims ideological affiliations that contradict their policy preferences (e.g., “conflicted conservatives” in Ellis and Stimson [2012]; see also Page and Jacobs [2009]).
Finally, much rests upon the operationalization of the national policy agenda, which here is based upon editorials from a major national newspaper. Alternative policy agenda measures are certainly conceivable. Yet, irrespective of the metrics, scholars investigating opinion-policy connections should begin by considering which issues are on the national agenda before moving to policy output. Delineating the denominator is challenging but, as illustrated here, deeply important in terms of the substantive conclusions regarding democratic performance.
Supplementary Data
Supplementary data are freely available online at http://poq.oxfordjournals.org/.
Footnotes
1. According to Jacobs and Shapiro (2000), responsiveness occurs when “the public’s substantive preferences point government officials in specific policy directions” (302; see also Page and Shapiro [1983]). It is similar to representation, which according to Pitkin (1967) is “acting…in a manner responsive [to the represented]” (209–10). Representation and responsiveness occur when governmental actions reflect public opinion.
2. Page and Shapiro (1983) also report declining responsiveness over time, from 67 percent congruence during 1935–45, to 54 percent during the 1960s, before rebounding somewhat in the 1970s.
3. Gilens (2012; also 2005) adopts a variant by modeling how the intensity of public preferences for policy change (i.e., 55 versus 90 percent) are associated with actual policy change. Analyses reported later revisit his formulation.
4. Binder (2003) uses New York Times editorials to determine the policy agenda, and then uses this in the denominator of a gridlock calculation, with the number of policies that failed to pass in the numerator. Interestingly, a lagged public mood variable does not affect gridlock, which suggests that public preferences do not influence public policy output once other factors are considered. For a review of other divided government works, see Binder (2003).
5. Charting the numerator is hard, especially when pollsters ask about a policy area in general without specifying a particular piece of legislation (e.g., questions about “welfare spending” might pertain to food stamps, Temporary Assistance for Needy Families [TANF], or even the earned income tax credit). As Page and Shapiro (1983, 176) note, “some opinion items are so ambiguous that they are not easily matched with specific policies.”
6. Burstein (2014, 32, fn. 5) defends his sample as being large enough, and he points to other studies that consider only a few dozen issues (e.g., Soroka and Wlezien [2010], who consider fewer than 35 issues).
7. Although some congressional sessions spanned three calendar years (e.g., January 3, 1957, to January 3, 1959, for the 85th), they are reported here in two-year intervals because most work takes place in the first and second years.
8. The coders achieved a high Krippendorff’s alpha reliability score of .82 on a random subsample of cases. The issues were also sorted randomly to ensure that one person did not code particular time periods or issues.
9. Any question wording that could be used to characterize public preferences was included (e.g., “favoring,” “supporting,” “approving” of a policy) as long as the question was asked of the entire sample, not just those who passed a prior filter item. Also, the time frame searched for each congressional session was the entire time the Congress met (i.e., the official start and end dates) as well as the weeks before the official start of the session, dating back to the election in the prior November. The rationale for doing this was that once a new Congress is elected, the public might be queried about issues ahead of its formal work period. As it turned out, most of the questions deemed relevant fell during the time period that Congress was in session. Finally, public opinion was sometimes aggregated (e.g., combining “strongly prefer” and “prefer”), and opinion was recorded as the percent in support of what was on the agenda, which in some cases meant using the agreement categories, while in other instances it was the disagreement categories.
10. These analyses are akin to “failed” randomization checks in an experiment; specifically, a negative binomial model for the count of poll questions on an issue rejects the null hypothesis of no significant predictors (model chi-square is significant at p < .001).
11. In this set of topics with at least one poll, the frequency of majoritarian congruence (i.e., when policy changes and more than 50 percent of the public supports change, or when policy does not change and a majority supports the status quo) is 48.2 percent (317 of 658).
12. Nearly three-quarters of the time, the coders expressed high confidence in their ability to match public opinion to the policy agenda items. However, ambiguity occasionally arose in the way the topic was described. The supplemental online appendix provides additional details.
13. The unit of analysis for these analyses is an opinion-policy pair. Since issues could have more than one poll, the models employ clustered standard errors in Stata to account for the lack of independence in some instances.
14. The cases include a diverse array of topics like the war on drugs, steel import quotas, tuition tax breaks, grazing permits, high-speed rail, domestic violence, repeal of the Glass-Steagall banking regulations, endangered species protection, online copyright infringement, flight caps at airports, forest firefighting, and bankruptcy laws.
15. The predicted probabilities in table 2 are based upon simulations (King, Tomz, and Wittenberg 2000) from the model shown in column 2 of table 1 for only cases where public opinion exists and without control variables. However, the substantive patterns are the same in simulations with models including controls or with models based upon the imputed data. In addition, controlling for decade or post-1981 to account for eras does not alter the substantive results.
16. For example, the issue of whether to abolish the Interstate Commerce Commission during the mid-1990s was not covered in public opinion polls. Using characteristics of the topic (e.g., government administration) and other factors (e.g., number of NYT editorials [n = 1], the legislative success [yes], as well as features of the era [104th Congress], and research assistant confidence in the poll question search [high]), opinion was imputed for this missing data point five times as recommended by King et al. (2001). The imputation estimates ranged from 48.2 to 62.3, with an average of 53.7 percent.
17. The data for Gilens (2012) are posted on the Russell Sage Foundation website, although code to reproduce the findings was not publicly available. In personal communication, Gilens graciously provided help on data-coding decisions, which permitted a successful replication of the main set of empirical findings (table 3.1 in Gilens [2012]).
18. Salience also plays a role, as it did earlier. Even with Gilens’s policy outcome measure, an interaction of opinion and salience produces the same patterns identified earlier (as in column 2 of table 1). Public opinion on low-salience issues (the constitutive term of the interaction) is positive and significant, but just as it was earlier, the interaction of opinion and salience is negative and statistically significant (p < .05).
19. The difference in the number of cases across table 3 stems from ambiguity related to the agenda items as well as different coding procedures. Null results like these and those reported earlier are important to combat potential biases in favor of publishing statistically significant findings (e.g., Gerber and Malhotra 2008; Gerber et al. 2010).
References
- Asher Herbert. 2011. Polling and the Public: What Every Citizen Should Know, 8th ed. Washington, DC: CQ Press.
- Bartels Larry M. 1991. “Constituency Opinion and Congressional Policy Making: The Reagan Defense Build Up.” American Political Science Review 85:429–56.
- ———. 2008. Unequal Democracy: The Political Economy of the New Gilded Age. Princeton, NJ: Princeton University Press.
- Binder Sarah A. 1999. “The Dynamics of Legislative Gridlock, 1947–1996.” American Political Science Review 93:519–33.
- ———. 2003. Stalemate: Causes and Consequences of Legislative Gridlock. Washington, DC: Brookings.
- Brehm John. 1993. The Phantom Respondents: Opinion Surveys and Political Representation. Ann Arbor: University of Michigan Press.
- Brooks Clem, Manza Jeff. 2007. Why Welfare States Persist: The Importance of Public Opinion in Democracies. Chicago: University of Chicago Press.
- Burstein Paul. 2003. “The Impact of Public Opinion on Public Policy: A Review and Agenda.” Political Research Quarterly 56:29–40.
- ———. 2014. American Public Opinion, Advocacy, and Policy in Congress: What the Public Wants and What It Gets. New York: Cambridge University Press.
- Carrubba Clifford J., Gabel Matthew, Murrah Lacey, Clough Ryan, Montgomery Elizabeth, Schambach Rebecca. 2006. “Off the Record: Unrecorded Legislative Votes, Selection Bias, and Roll-Call Vote Analysis.” British Journal of Political Science 36:691–704.
- Cook Fay Lomax, Barabas Jason, Page Benjamin I. 2002. “Invoking Public Opinion: Policy Elites and Social Security.” Public Opinion Quarterly 66:235–64.
- Dahl Robert A. 1956. A Preface to Democratic Theory. Chicago: University of Chicago Press.
- Druckman James N., Jacobs Lawrence R. 2006. “Lumpers and Splitters: The Public Opinion Information That Politicians Collect and Use.” Public Opinion Quarterly 70:453–76.
- Ellis Christopher, Stimson James A. 2012. Ideology in America. New York: Cambridge University Press.
- Enns Peter K., Wlezien Christopher. 2011. Who Gets Represented? New York: Russell Sage.
- Erikson Robert S., MacKuen Michael B., Stimson James A. 2002. The Macro Polity. New York: Cambridge University Press.
- Erikson Robert S., Wright Gerald C., McIver John P. 1993. Statehouse Democracy: Public Opinion and Democracy in American States. New York: Cambridge University Press.
- Gerber Alan S., Malhotra Neil. 2008. “Do Statistical Reporting Standards Affect What Is Published? Publication Bias in Two Leading Political Science Journals.” Quarterly Journal of Political Science 3:313–26.
- Gerber Alan S., Malhotra Neil, Dowling Conor M., Doherty David. 2010. “Publication Bias in Two Political Behavior Literatures.” American Politics Research 38:591–613.
- Gilens Martin. 2005. “Inequality and Democratic Responsiveness.” Public Opinion Quarterly 69:778–96.
- ———. 2012. Affluence and Influence: Economic Inequality and Political Power in America. Princeton, NJ: Princeton University Press.
- Glynn Carroll J., Herbst Susan, O’Keefe Garrett, Shapiro Robert, Lindeman Mark. 2004. Public Opinion, 2nd ed. Boulder, CO: Westview.
- Herbst Susan. 1993. Numbered Voices: How Opinion Polling Has Shaped American Politics. Chicago: University of Chicago Press.
- ———. 1998. Reading Public Opinion: How Political Actors View the Democratic Process. Chicago: University of Chicago Press.
- Hill Kim Quaile, Hurley Patricia A. 1999. “Dyadic Representation Reappraised.” American Journal of Political Science 43:109–37.
- Hobolt Sara Binzer, Klemmemsen Robert. 2005. “Responsive Government? Public Opinion and Government Policy Preferences in Britain and Denmark.” Political Studies 53:379–402.
- Jacobs Lawrence R., Page Benjamin I. 2005. “Who Influences US Foreign Policy?” American Political Science Review 99:107–23.
- Jacobs Lawrence R., Shapiro Robert Y. 1997. “The Myth of the Pandering Politician.” Public Perspective 8:3–5.
- ———. 2000. Politicians Don’t Pander: Political Manipulation and the Loss of Democratic Responsiveness. Chicago: University of Chicago Press.
- ———. 2005. “Polling Politics, Media, and Election Campaigns.” Public Opinion Quarterly 69:635–41.
- Jones Michael D., Jenkins-Smith Hank C. 2009. “Trans-Subsystem Dynamics: Policy Topography, Mass Opinion, and Policy Change.” Policy Studies Journal 37:37–58.
- Karol David. 2007. “Has Polling Enhanced Representation? Unearthing Evidence from the Literary Digest Issue Polls.” Studies in American Political Development 21:16–29.
- Keeter Scott. 2012. “Presidential Address: Survey Research, Its New Frontiers, and Democracy.” Public Opinion Quarterly 76:600–608.
- Keeter Scott, Kennedy Courtney, Dimock Michael, Best Jonathan, Craighill Peyton. 2006. “Gauging the Impact of Growing Nonresponse on Estimates from a National RDD Telephone Survey.” Public Opinion Quarterly 70:759–79.
- King Gary, Honaker James, Joseph Anne, Scheve Kenneth. 2001. “Analyzing Incomplete Political Science Data.” American Political Science Review 95:49–69.
- King Gary, Tomz Michael, Wittenberg Jason. 2000. “Making the Most of Statistical Analyses: Improving Interpretation and Presentation.” American Journal of Political Science 44:341–55.
- Kuklinski James H. 1978. “Representativeness and Elections: A Policy Analysis.” American Political Science Review 72:165–77.
- Kuklinski James H., Segura Gary M. 1995. “Endogeneity, Exogeneity, Time, and Political Representation.” Legislative Studies Quarterly 8:139–64.
- Lax Jeffrey R., Phillips Justin. 2012. “The Democratic Deficit in the States.” American Journal of Political Science 56:148–66.
- Manza Jeff, Brooks Clem. 2012. “How Sociology Lost Public Opinion: A Genealogy of a Missing Concept in the Study of the Political.” Sociological Theory 30:89–113.
- Manza Jeff, Cook Fay Lomax. 2002a. “A Democratic Polity? Three Views of Policy Responsiveness to Public Opinion in the United States.” American Politics Research 30:630–67.
- ———. 2002b. “The Impact of Public Opinion on Public Policy: The State of the Debate.” In Navigating Public Opinion: Polls, Policy, and the Future of American Democracy, edited by Manza Jeff, Cook Fay Lomax, Page Benjamin I., 17–32. New York: Oxford University Press.
- Marsden Peter V., Wright James D. 2010. Handbook of Survey Research, 2nd ed. Bingley, UK: Emerald Group.
- Mayhew David. 1991. Divided We Govern: Party Control, Lawmaking, and Investigations, 1946–1990. New Haven, CT: Yale University Press.
- McDonald Michael P., Popkin Samuel. 2001. “The Myth of the Vanishing Voter.” American Political Science Review 95:963–74.
- Miller Warren E., Stokes Donald E. 1963. “Constituency Influence in Congress.” American Political Science Review 57:45–56.
- Monroe Alan D. 1979. “Consistency between Public Preferences and National Policy Decisions.” American Politics Quarterly 7:3–19.
- ———. 1998. “Public Opinion and Public Policy, 1980–1993.” Public Opinion Quarterly 62:6–28.
- Mooney Christopher Z., Lee Mei-Hsien. 2000. “The Influence of Values on Consensus and Contentious Morality Policy: US Death Penalty Reform, 1956–1982.” Journal of Politics 62:223–39.
- Pacheco Julianna. 2013. “The Thermostatic Model of Responsiveness in the American States.” State Politics and Policy Quarterly 13:306–32.
- Page Benjamin I. 1994. “Democratic Responsiveness? Untangling the Links between Public Opinion and Policy.” PS: Political Science and Politics 27:25–29.
- ———. 2002. “The Semi-Sovereign Public.” In Navigating Public Opinion, edited by Manza Jeff, Cook Fay Lomax, Page Benjamin I., 325–44. New York: Oxford University Press.
- Page Benjamin I., Jacobs Lawrence R. 2009. Class War? What Americans Really Think about Economic Inequality. Chicago: University of Chicago Press.
- Page Benjamin I., Shapiro Robert Y. 1983. “Effects of Public Opinion on Policy.” American Political Science Review 77:175–90.
- Pitkin Hanna F. 1967. The Concept of Representation. Berkeley: University of California Press.
- Shapiro Robert Y. 2011. “Public Opinion and American Democracy.” Public Opinion Quarterly 75:982–1017.
- Shaw Greg M., Mysiewicz Sarah E. 2004. “Social Security and Medicare.” Public Opinion Quarterly 68:394–423.
- Simon Paul. 2006. Our Culture of Pandering. Carbondale: Southern Illinois University Press.
- Smith Tom W. 1987. “The Art of Asking Questions, 1936–1985.” Public Opinion Quarterly 51:S95–S108.
- Soroka Stuart N., Wlezien Christopher. 2010. Degrees of Democracy: Politics, Public Opinion, and Policy. New York: Cambridge University Press.
- Stevens Daniel. 2002. “Public Opinion and Public Policy: The Case of Kennedy and Civil Rights.” Presidential Studies Quarterly 32:111–36.
- Stimson James A. 1991. Public Opinion in America. Boulder, CO: Westview.
- Weissberg Robert. 1978. “Collective vs. Dyadic Representation in Congress.” American Political Science Review 72:535–47.
- Wlezien Christopher. 1995. “The Public as Thermostat: Dynamics of Preferences for Spending.” American Journal of Political Science 39:981–1000.
- ———. 1996. “Dynamics of Representation: The Case of US Spending on Defense.” British Journal of Political Science 26:81–103.
- ———. 2004. “Patterns of Representation: Dynamics of Public Preferences and Policy.” Journal of Politics 66:1–24.



