To appear in Public Administration Review 2015
The death of Freddie Gray in Baltimore has led many of us to search for the reasons behind that horrible outcome. Many important explanations have been advanced to help citizens understand what led to that event. The issues they raise are not new: here was a population that was forgotten and that suffered in terms of jobs, housing, and educational opportunities. The result is a lost generation that has been denied opportunity in the US. These are structural problems that even the best efforts of community groups cannot tackle on their own.
But there are also less obvious explanations for this tragedy. Ironically, some efforts to make government more effective have compounded the problems of this population. Press coverage since Gray’s death has pointed to administrative policies and processes put in place by former Baltimore Mayor Martin O’Malley that some feel contributed to the explosion we have witnessed on the streets of Baltimore. But O’Malley’s efforts are not unique. What he brought to Baltimore mirrored efforts in New York City that became the model for public administrators across the country seeking to respond to citizen concern about the effectiveness of public programs. While much of the attention today is on the criminal justice system, similar problems have emerged in other sectors.
O’Malley’s effort, called CitiStat, was built on a New York City program entitled CompStat (short for complaint statistics). CompStat was a management technique, based on private sector concepts, that was designed to reduce crime and make efficient use of resources. The process emphasized the accountability of staff to the organization’s top officials and was used to map crime and identify problems in the criminal justice system.
Staff from local precincts met with top officials weekly to discuss ways to reduce crime. Each precinct was required to develop a statistical summary of the week’s crime complaints and experiences, and to submit a weekly report describing significant cases, crime patterns, and other relevant police activities.
While this process gave participants the impression that they were improving the delivery of services to the community, it contained hidden traps. Several of the same attributes are found in other performance assessment efforts.
First, the process tended to treat the city as a single, centralized unit and to develop expectations that applied citywide, an approach sometimes called “one size fits all.” This minimized attention to the special needs of different populations, to the way those populations perceived the police, and, conversely, to how the police perceived them.
Second, the reliance on quantitative data minimized attention to qualitative assessments of the service, particularly issues related to the quality of treatment. When the New York City police department initially emphasized stop-and-frisk, the practice produced 50,000 stops a year. The department decided that these stops were a good metric of productivity, and the annual number ultimately exceeded half a million. Many police officers acknowledged that the policy produced many unjustified stops.
Third, most of the measures used in the process were attempts to maximize efficiency in the delivery of services, and increased efficiency was often expected to lead to budget reductions. Little attention was paid to the effectiveness of programs or to the equity of service delivery. Some have described this as worshipping the gods of efficiency while ignoring issues of equity.
Fourth, the process created pressure on individual police officers to meet the centralized expectations, and salary decisions were sometimes linked to these assessments. At least one assessment of the New York City experience found that a number of retired police officials complained that pressure from department brass prompted widespread manipulation of CompStat statistics, specifically the downgrading of reports of serious crimes to less serious offenses.
Management strategies such as CompStat clearly have positive motivations behind them, but their unanticipated consequences cannot be ignored. The belief that you should manage only what you can count can be misleading. The Baltimore tragedy suggests that we need techniques that acknowledge the complexity of many public programs, and that we must accept that what works in one setting may not be helpful in another.
This is not an easy problem to solve. Relying only on quantitative measures has real limitations; the challenge is to strike a balance between quantitative measures and qualitative assessments. This brings us back to one of the basic issues in the public administration field: acknowledging that management is both an art and a science.