For more than 60 years, advocates for improving the substance of public policy in the US have argued that this goal can best be accomplished through the use of information. This argument emerged during the 1960s with the adoption of the Planning, Programming and Budgeting System (PPBS). That initiative reflected a belief in the potential of analytic skills to devise a rational framework that could produce useful and agreed-upon data, which in turn would support policies that are most effective, most efficient, and at times even most equitable.
It assumed that such information was available to decision makers, that it was possible for participants to agree on cause-and-effect relationships, and that almost all activities could be quantified and measured. The set of assumptions underlying PPBS crossed political lines and had the support of both Republicans and Democrats.
The most recent example of this rationality-based strategy was found in the Report of the Commission on Evidence-Based Policymaking, a bipartisan group created by Congress in 2016 to develop a strategy for increasing the availability and use of data in order to build evidence about government programs that would be the basis of policy-making. The Commission’s report, issued in September, called for “a future in which rigorous evidence is created efficiently, as a routine part of government operations, and used to construct effective public policy.”
Indeed, it is more than a little ironic that such a Commission issued its report in the current political climate. Attempts to apply the norms and values of science to the public sector have always been difficult. But today, in a world of "fake news" charges and disputes about the accuracy and legitimacy of information, the barriers to success seem overwhelming.
If we are to understand the limitations of the effort to rely on what is often called neutral evidence, it is useful to review past experiences. The constraints established in the US constitutional structure require players to look for multiple perspectives. Many issues in our complex system involve players with different perspectives and impose limits on any one player’s ability to define the kinds of information deemed appropriate. Recent attempts to impose private sector values (especially profit) on a public system have created conflicts not envisioned by the US system architects.
A classic New Yorker cartoon probably provides the clearest definition of the problem: a drawing of a file cabinet with drawers labeled "Our facts," "Absolute facts," "Their facts," "Bare facts," "Neutral facts," "Unsubstantiated facts," "Disputable facts," and "Indisputable facts."
Despite these conflicts, many advocates for reliance on neutral information believe that data sources exist, or could emerge, that would allow decision-makers to avoid the constraints that flow from the Constitution and the US political process. These advocates are convinced that they can draw on their academic training for a set of skills suited to a data-driven process. Over the same period, social scientists (including economists, psychologists and sociologists) began to appear on the evaluation and policy analysis staffs of multiple federal agencies.
Skeptics also appeared. Alice Rivlin, the first head of the Congressional Budget Office, reminisced about the early days in Washington of the evaluation field. She noted: “HEW in the late 60s was a wonderful place to be. The Congress had recently passed a raft of new programs. … Both advocates and evaluators were naïve by today’s standards… It gradually dawned on all of us that progress was going to be more complicated.”
Several well-known writers emerged during the late 1970s and early 1980s who provided a picture of that complexity. Lindblom and Cohen's work, Usable Knowledge: Social Science and Social Problem Solving, contrasted the approach to information that emerges from academic social science with "ordinary knowledge," the information that emerges from common-sense speculation. They argued that too many policy analysts and researchers greatly underestimated the use and effectiveness of ordinary knowledge, laden as it was with intuition, emotional responses, and values.
Carol Weiss, an evaluation specialist, provided a somewhat different although related perspective. She characterized information tasks as the three "I"s: information, ideology and interests. As early as 1983 she noted that "Observers who expect the subcategory of information that is social science research to have immediate and independent power in the process, and who bitterly complain about the intrusion of 'politics'… into the use of research, implicitly hold a distorted view of how decisions are made."
An Italian academic, Giandomenico Majone, in Evidence, Argument, and Persuasion in the Policy Process, substituted the idea of argument and evidence for the approach used by applied social scientists. Majone viewed the analyst as one who plays the role of participant and mediator rather than objective scientist. For him, evidence was much closer to the process applied in legal reasoning. Majone noted: “Argumentation is the key process through which citizens and policymakers arrive at moral judgments and policy choices.”
The combined message from these individuals, and others, led some academics and policy-makers to recognize the existence of what Majone called the “uncritical acceptance of the ‘scientific method’.” It had turned into a mechanistic process. The concept of “the public good” (developed by economist Paul Samuelson) seems to have vanished from the debate, suggesting that private sector values (profit and efficiency) have drowned out collective concerns within our society. Neither the recent Commission report nor much of the literature on evidence-based policy provides a reader with a useful sense of the insights and skepticism that had been developing. Yet the advocates of what has come to be termed “the evidence movement” continue to believe in its potential.
But this interest may not take us very far. Some have attempted to focus on an important aspect of this increased interest, namely expanding our understanding of what constitutes useful and useable evidence, not simply arguing that all information is appropriate. Such a focus may be a useful way of acknowledging that we are a society that contains multiple values. These diverse values challenge the ability of information, alone, to deal with the conflicts that emerge during policy discussion.
As a result, the concept of "evidence-based" decision-making cannot be disentangled from a series of value, structural, and political attributes that make such agreement difficult. We continue to search for ways to identify a range of approaches that are effective, efficient, and equitable for a diverse citizenry.