The Brennan Center released a report this week on crime and murder in the country with the headline “New Data: Crime and Murder Down in 2017,” stating that “an analysis of new 2017 crime data shows that all measures of crime — overall crime, violence, and murder — are projected to decline this year.” I questioned some of the findings and methodology on Twitter but thought it was important to expand on my problems with the report here.

The report takes data as available from the 30 largest cities in America and tries to assess where we are as of around midyear 2017. It’s a process not all that unlike what I’ve done for FiveThirtyEight, though with a smaller sample size. The Washington Post summarized the report with the somewhat breathless headline “2017 is on pace for the second-lowest crime rate since 1990 — and near-record low murders.”

I have a number of problems with both the report’s methodology and how the report is being characterized.

First, we need to be upfront about what this report is and is not assessing, as it is far from conclusive proof that crime is going to fall in 2017. The overall “crime” assertion comes from a projected fall in the crime RATE (not the number of crimes) in 20 of the 30 big cities for which they made projections. Rather than projecting a drop in national crime, they are simply projecting that those 20 big cities will see a 1.8 percent decline in the UCR Part I crime rate from 2016 levels.

The rates for 2017 are projected by assuming each city’s population keeps growing at the average pace the FBI calculated for 2010 to 2015. That may or may not be true for most cities. In New Orleans, for example, it would overstate the city’s population growth, which has slowed considerably in each of the last few years.

Without having the exact projected crime and population figures, therefore, it’s impossible to say whether they’re projecting the actual number of UCR Part I crimes to increase but the rate to decrease. The number of crimes and murders will almost certainly not reach historic lows. The rate of those crimes will probably remain near historic lows regardless of whether there is a rise or fall this year.
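
To make the count-versus-rate distinction concrete, here is a small hypothetical sketch. The numbers are made up for illustration and are not the report’s figures; the point is simply that an assumed population increase can turn a rising crime count into a falling crime rate.

```python
# Hypothetical illustration (not the report's actual figures): a city's crime
# count can rise while its crime rate falls, if the assumed population growth
# outpaces the increase in crime.

crimes_2016, pop_2016 = 50_000, 1_000_000
crimes_2017, pop_2017 = 50_500, 1_025_000   # crimes up 1%, population up 2.5%

rate_2016 = crimes_2016 / pop_2016 * 100_000   # 5,000 per 100,000 residents
rate_2017 = crimes_2017 / pop_2017 * 100_000   # about 4,927 per 100,000 residents

print(f"Count change: {crimes_2017 - crimes_2016:+,}")             # +500 crimes
print(f"Rate change:  {(rate_2017 / rate_2016 - 1) * 100:+.1f}%")  # about -1.5%
```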

The second issue I have with the report centers on two cities, Detroit and Phoenix, because the projections for both rely on badly misleading underlying data. (I erroneously asserted on Twitter that they had included San Francisco’s 2016 count but not its 2017 count in their calculations; that was a mistake on my part.) The projection for Detroit relies on the city’s excellent open data portal. The problem is that Detroit’s open data portal is really, really slow to update its homicide incidents. Over the long term it’s a great tool for assessing trends; for the current year it’s unreliable at best and grossly misleading at worst.

For Detroit they projected a mere 220 murders in 2017, which would be the fewest murders there since 1966. The problem is that Detroit had 133 murders as of late June 2017, and a majority of each year’s murders would be expected to occur over the second half of the year. Applying their methodology to Detroit’s actual murder total would give that city roughly 294 murders rather than 220, basically erasing the entire national murder drop that is being projected.
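
For a rough sense of that arithmetic, here is my back-of-the-envelope reconstruction. The 45 percent midyear share is an assumption implied by the figures above, not a number taken from the report.

```python
# Back-of-the-envelope reconstruction (mine, not the report's calculation):
# under the ratio method, year-end total = year-to-date count divided by the
# share of last year's murders that had occurred by the same date.

ytd_murders_2017 = 133      # Detroit murders as of late June 2017
share_by_late_june = 0.45   # assumed share of the year's murders by late June

print(round(ytd_murders_2017 / share_by_late_june))  # 296, in line with the ~294 above
```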

For Phoenix they relied on the Major Cities Chiefs Association (MCCA) midyear murder count and simply doubled it. Phoenix reported 30 murders in 2017 and 40 in 2016, so the report projected 60 murders in 2017 versus 80 in 2016. The problem is that Phoenix only reported first-quarter figures to the MCCA. Phoenix had about 150 murders in 2016 and told me for my 538 piece that they had 58 murders through the end of May 2017 (versus 56 through the same timeframe in 2016).
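
To see how much the first-quarter-only figure distorts things, here is a quick comparison using the numbers above. The ratio-style projection is my own arithmetic applied to the May figures, not a calculation from the report.

```python
# Quick comparison (my arithmetic, not the report's): doubling a first-quarter
# count versus applying a ratio-style projection to the fuller May figures.

doubled_q1 = 2 * 30                   # the report's approach: 60 projected murders

ytd_may_2017, ytd_may_2016 = 58, 56   # Phoenix murders through the end of May
total_2016 = 150                      # approximate 2016 year-end total
ratio_projection = ytd_may_2017 / (ytd_may_2016 / total_2016)

print(doubled_q1)               # 60
print(round(ratio_projection))  # about 155
```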

Take those two cities out of the equation and you’re looking at a slight rise in murder among the sample rather than a slight fall. Either could happen; there simply is not enough quality data to say.

The final issue I have with the report regards how the projections are done. Brennan projected mostly second-half figures by taking the year-to-date data and using “the ratio of crimes committed through this date last year to project the rest of the year, trying to account for the midyear problem.”

That method might have some accuracy if done in November, but taking data from around midyear and projecting like this is wildly inaccurate. To show how inaccurate these projections might be, I took midyear data for New Orleans from 2007 to 2016 and projected each year’s total using this methodology. Here’s what I got:

[Chart: projected vs. actual year-end murder counts for New Orleans, 2007 to 2016]

The misses tended to even out high and low with an average absolute miss of 11% above or below the final year total. Applying this method to midyear 2017 would project an astounding 257 murders in New Orleans for 2017 (we’re on pace for around 170 as of this writing after a summer slowdown). Some years — particularly 2013 — would’ve been accurate using this methodology, but on the whole this would not be a particularly good tool for projecting murder counts.
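
For anyone who wants to run this kind of backtest on another city’s data, here is a minimal sketch of the approach, assuming the ratio method works the way the report describes it. The yearly counts below are placeholders for illustration, not real data.

```python
# A minimal sketch of backtesting a ratio-style projection on a city's
# historical counts. The projection function is my reading of the method the
# report describes; the counts below are placeholders, not real data.

def project_year_end(ytd_this_year, ytd_last_year, total_last_year):
    """Project the full-year count by assuming the same share of the year's
    incidents has occurred by this date as had occurred by the same date last year."""
    share_by_this_date = ytd_last_year / total_last_year
    return ytd_this_year / share_by_this_date

# {year: (midyear count, year-end count)} -- placeholder numbers only
midyear_and_final = {
    2015: (80, 160),
    2016: (85, 175),
}

errors = []
for year, (midyear, final) in sorted(midyear_and_final.items()):
    prev = midyear_and_final.get(year - 1)
    if prev is None:
        continue  # no prior-year baseline to build the ratio from
    prev_mid, prev_final = prev
    projected = project_year_end(midyear, prev_mid, prev_final)
    errors.append(abs(projected - final) / final)

print(f"Average absolute miss: {sum(errors) / len(errors):.0%}")
```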

Again, it’s possible that murder will fall nationally in 2017. I found murder up about 3 to 4 percent in the 70 or so big cities I surveyed around midyear. Murder has slowed down in the last few months in New Orleans, Baltimore, and Chicago, and a national murder drop is not out of the picture. If I had to guess, I’d say the national murder count will be roughly what it was in 2016, plus or minus a few percentage points, but that’s just an educated guess. At this point in the year that’s basically all we can do with the available data.

In conclusion, I’m not saying that Brennan’s projection can’t come true on a national scale. I’m just saying that a forecast like that requires gigantic error bars considering the small size of the sample. Collecting murder data in near real time is a needlessly difficult task, which increases the likelihood of collection errors.

Given the challenges of collecting the data, the small sample size, and the immense difficulty of projecting year-end murder trends for the entire country, it is important to be overly generous with estimative language and margins of error when presenting a forecast like this. Instead, this report is being presented as definitive proof of an outcome that may or may not come true.