“The FBI continues to report that armed citizens stopped only 14 of the 302 active shooter incidents that it identified for the period 2014-2022. The correct rate is almost eight times higher. And if we limit the discussion to places where permit holders were allowed to carry, the rate is eleven times higher,” wrote Lott. He further noted, “[O]ut of 440 active shooter incidents from 2014 to 2022, an armed citizen stopped 157. We also found that the FBI had misidentified five cases, usually because the person who stopped the attack was incorrectly identified as a security guard.”
This bit comes from the Crime Prevention Research Center. It pisses me off that the reader has to do the legwork of finding the article’s sources, but…their sources are there.
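For what it’s worth, the arithmetic behind the quote is at least internally consistent if you take Lott’s counts at face value. A quick back-of-the-envelope check (his numbers, not independently verified):

```python
# Sanity check of the quoted ratios (Lott's counts, taken at face value).
fbi_stopped, fbi_total = 14, 302       # FBI: armed citizens stopped 14 of 302 incidents
cprc_stopped, cprc_total = 157, 440    # CPRC: armed citizens stopped 157 of 440 incidents

fbi_rate = fbi_stopped / fbi_total     # ~4.6%
cprc_rate = cprc_stopped / cprc_total  # ~35.7%

print(f"FBI rate:  {fbi_rate:.1%}")
print(f"CPRC rate: {cprc_rate:.1%}")
print(f"Ratio:     {cprc_rate / fbi_rate:.1f}x")  # ~7.7, i.e. "almost eight times higher"
```

So the “almost eight times higher” line is just these two rates divided; whether the underlying counts are trustworthy is the actual question.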
The outcome? Of the 18,000 agencies nationwide, only half filed a year’s worth of data, and only 63% put forth partial information. This made already dubious data even more deceptive and spotty; in fact, the country’s biggest cities, New York and Los Angeles, failed to report statistics to the FBI at all. Subsequently, the FBI often draws its conclusions using a small percentage of jurisdictions—or none—in a state to conjure up a national estimate. The lack of crime data from big cities can also lead to the perception that those cities (or the states they are in) are safer than they are.
Then…why don’t those jurisdictions report their data? That’s not the FBI’s fault. It works with what it has. Moreover, how has the CPRC fixed this problem? Are these jurisdictions reporting to the research center but not the FBI for some reason? Highly doubtful. So, I’m not sure why we’d trust the CPRC over the FBI.
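To be fair to the quoted complaint, spotty reporting really can bias a naive national estimate. Here’s a toy sketch with completely made-up numbers showing how dropping big, high-crime cities from the data pulls the estimated rate down:

```python
# Toy illustration with invented numbers: a "national" crime rate computed
# only from the jurisdictions that happened to report.
jurisdictions = [
    # (name, population, crimes, reported)
    ("Big City A",     8_000_000, 44_000, False),  # missing, e.g. during a reporting changeover
    ("Big City B",     4_000_000, 24_000, False),
    ("Suburb C",         500_000,  1_500, True),
    ("Small Town D",      50_000,    100, True),
    ("Rural County E",    20_000,     30, True),
]

def rate_per_100k(rows):
    pop = sum(p for _, p, _, _ in rows)
    crimes = sum(c for _, _, c, _ in rows)
    return 100_000 * crimes / pop

true_rate = rate_per_100k(jurisdictions)
reported_rate = rate_per_100k([j for j in jurisdictions if j[3]])

print(f"Rate using every jurisdiction: {true_rate:.0f} per 100k")
print(f"Rate using only the reporters: {reported_rate:.0f} per 100k")
# With the high-crime cities missing, the naive estimate looks much safer than reality.
```

That’s a real limitation of the data, but it applies to anyone working from the same incomplete reports, the CPRC included.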
At every stage of the data-collection process, bias and distortion can infect a narrative.
This is literally just the nature of statistical analysis. It’s why in my state people are like, “crime is definitely increasing!” and the police are like, “Our statistics show that crime is decreasing.” When multiple slashed tires are reported as a single instance of crime rather than an instance for each tire or each car, there’s going to be a difference in how crime is perceived between the public and officials.
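Here’s the slashed-tires point as a tiny sketch, with invented numbers, just to show how much the counting convention alone moves the total:

```python
# Invented example: three vandalism reports, each involving several slashed tires.
reports = [
    {"cars": 1, "tires_slashed": 4},
    {"cars": 2, "tires_slashed": 6},
    {"cars": 3, "tires_slashed": 12},
]

per_report = len(reports)                              # 3 "crimes" if each report is one incident
per_car    = sum(r["cars"] for r in reports)           # 6 "crimes" if each car counts
per_tire   = sum(r["tires_slashed"] for r in reports)  # 22 "crimes" if each tire counts

print(per_report, per_car, per_tire)
# Same underlying events, three different totals -- which is why residents and
# officials can both be "right" while describing very different trends.
```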
Last year, the Centers for Disease Control and Prevention (CDC)—the supposedly non-partisan national public health agency of the United States—quietly deleted some data on defensive gun uses from its website.
This is an unsubstantiated claim. It’s their job to provide evidence for it since they made the claim. I have no reason to believe this is true.
With such skewed statistics and stories, this isn’t a fair debate; it is a political attack to take away our freedom. If we don’t stand up for accuracy and point out the problems with these government statistics, our rights will inevitably diminish under the canopy of a political agenda.
The conclusion here is woefully unsupported. While the statistics are skewed, they might be good enough. I’m not going to get into the problems of statistical analysis, but suffice it to say that sometimes, good enough is good enough. The article certainly didn’t offer a more accurate alternative.
And really, the implied argument that statistics that support gun control are an attack on our “freedom” is just straight, grade A, unadulterated nonsense. I really wish people would make that argument explicit rather than just asserting it like it’s true. And the slippery slope he invokes, “our rights will inevitably diminish,” is a cute rhetorical trick that tries and fails to absolve him of actually arguing that the slide is in fact inevitable.
It’s worse than that. LA and NYC failed to report because of the NIBRS changeover, and only because there’s a grace period for reporting to the FBI while they comply with the changeover. Everything about this article is dishonest.
Last year, the Centers for Disease Control and Prevention (CDC)—the supposedly non-partisan national public health agency of the United States—quietly deleted some data on defensive gun uses from its website.
This easily veers into the problems of statistical analysis.
The inclusion of the 2.5 million DGU numbers hinges on what counts as a defensive gun use. According to “The Reload”, on which your link is based:
GVA uses the most conservative criteria for what constitutes a defensive gun use. Instead of attempting to capture any time a person legally uses a gun to defend themselves or others, it only counts incidents that make it into media reports or police reports (though it’s unclear how many police reports they have access to). The site’s methodology takes a strikingly dismissive tone towards any other potential defensive gun uses.
But what’s Gary Kleck’s methodology, the means by which he estimated the 2.5 million DGUs?
By pure coincidence, The Reload doesn’t cover that explicitly. It merely alludes to the fact that he extrapolated that amount.
So, doing their research again since people can’t seem to do it themselves (also, thank god for AI…really makes this process go way faster), here’s an analysis of their work by David Hemenway, a professor of health policy at Harvard.
I’m going to quote the entire “The Kleck-Gertz Survey” section of that paper:
In 1992, Kleck and Gertz conducted a national random-digit-dial survey of five thousand dwelling units, asking detailed questions about self-defense gun use. Their estimates of civilian self-defense gun use range from 1 million to 2.5 million times per year. The 2.5 million figure is the one they believe to be most accurate and the one Kleck has publicized, so that figure will be discussed in this paper.
K-G derive their 2.5 million estimate from the fact that 1.33% of the individuals surveyed reported that they themselves used a gun in self-defense during the past year; in other words, about 66 people out of 5000 reported such a use. Extrapolating the 1.33% figure to the entire population of almost 200 million adults gives 2.5 million uses.
Many problems exist with the survey conducted by Kleck and Gertz. A deficiency in their article is that they do not provide detailed information about their survey methodology or discuss its many limitations. For example, the survey was conducted by a small firm run by Professor Gertz. The interviewers presumably knew both the purpose of the survey and the staked-out position of the principal investigator regarding the expected results.
The article states that when a person answered, the interview was completed 61% of the time. But what happened when there was a busy signal, an answering machine, or no answer? If no one was interviewed at a high percentage of the initially selected homes, the survey cannot be relied on to yield results representative of the population. Interviewers do not appear to have questioned a random individual at a given telephone number, but rather asked to speak to the male head of the household. If that man was not at home, the caller interviewed the adult who answered the phone. Although this approach is sometimes used in telephone surveys to reduce expense, it does not yield a representative sample of the population.
The 2.5 million estimate is based on individuals rather than households. But the survey is randomized by dwelling unit rather than by the individual, so the findings cannot simply be extrapolated to the national population. Respondents who are the only adults in a household will receive too much weight.
K-G oversampled males and individuals from the South and West. The reader is presented with weighted rather than actual data, yet the authors do not explain their weighting technique. K-G claim their weighted data provide representative information for the entire country, but they appear to have obtained various anomalous results. For example, they find that only 38% of households in the nation possess a gun, which is low, outside the range of all other national surveys. They find that only 8.9% of the adult population is black, when 1992 Census data indicate that 12.5% of individuals were black.
The above limitations are serious. However, it is two other aspects of the survey that, when combined together, lead to an enormous overestimation of self-defense gun use: the fact that K-G are trying (1) to measure a very low probability event which (2) has positive social desirability response bias. The problem is one of misclassification.
Conducting a survey like Kleck did would be like if I surveyed Trump support on /c/conservative, took the proportion that said they support him, and multiplied it by the number of accounts in the Fediverse. Do you really think that’s representative of support for Trump across the Fediverse? If you do, you’re just wrong. If you don’t, then you shouldn’t accept Kleck’s haphazardly generated 2.5 million number either.
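To put some illustrative numbers on the misclassification problem Hemenway describes (these are numbers I’m inventing for the sketch, not his or Kleck’s actual data), here’s why even a tiny false-positive rate wrecks a rare-event estimate:

```python
# Illustrative numbers only: the generic rare-event / false-positive problem,
# not a reconstruction of the actual Kleck-Gertz survey data.
adult_population = 200_000_000   # roughly the adult population Kleck extrapolated to
sample_size = 5_000

true_dgu_rate = 0.001            # suppose 0.1% of adults really had a DGU last year
false_positive_rate = 0.012      # and 1.2% of everyone else misreports one
                                 # (bragging, telescoping, social desirability...)

true_yes  = sample_size * true_dgu_rate                               # ~5 genuine reports
false_yes = sample_size * (1 - true_dgu_rate) * false_positive_rate   # ~60 spurious ones

observed_rate = (true_yes + false_yes) / sample_size   # ~1.3% answer "yes"
extrapolated = observed_rate * adult_population

print(f"Observed 'yes' rate: {observed_rate:.2%}")
print(f"Extrapolated DGUs:   {extrapolated:,.0f}")                      # ~2.6 million
print(f"Actual DGUs:         {true_dgu_rate * adult_population:,.0f}")  # 200,000
```

A misreporting rate of barely one percent turns roughly 200,000 real events into a multi-million “estimate,” and there’s no way to tell the difference from the survey answers alone.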
The inclusion of the 2.5 million DGUs isn’t a political issue, though gun rights activists make it out to be. It’s one of statistics, and statisticians say his methodology is trash. No matter what you want to believe, no matter how hard, the 2.5 million DGU figure is far, far more probably false than it is true…
Great response! I’ll further add that the OP article does the exact same thing. It redefines the metric to be more favorable to its narrative, despite that not being the metric by which any agency in the country measures these numbers, and then it fails to explain its own methodology or why it would be more accurate. It’s dishonestly redefining its terms while ignoring all the issues inherent in this data in the first place.
Yeah, this article is trash…
But!
https://www.nationalgunrights.org/resources/news/cdc-removed-defensive-gun-use-stats-after-pressure-from-anti-gunners/#:~:text=According to The Reload%2C last summer%2C the CDC,that stat to their website under “fast facts.”